955 results for Multiple classification
Abstract:
How do humans rapidly recognize a scene? How can neural models capture this biological competence to achieve state-of-the-art scene classification? The ARTSCENE neural system classifies natural scene photographs by using multiple spatial scales to efficiently accumulate evidence for gist and texture. ARTSCENE embodies a coarse-to-fine Texture Size Ranking Principle whereby spatial attention processes multiple scales of scenic information, ranging from global gist to local properties of textures. The model can incrementally learn and predict scene identity by gist information alone and can improve performance through selective attention to scenic textures of progressively smaller size. ARTSCENE discriminates 4 landscape scene categories (coast, forest, mountain and countryside) with up to 91.58% correct on a test set, outperforms alternative models in the literature which use biologically implausible computations, and outperforms component systems that use either gist or texture information alone. Model simulations also show that adjacent textures form higher-order features that are also informative for scene recognition.
Abstract:
Fusion ARTMAP is a self-organizing neural network architecture for multi-channel, or multi-sensor, data fusion. Single-channel Fusion ARTMAP is functionally equivalent to Fuzzy ART during unsupervised learning and to Fuzzy ARTMAP during supervised learning. The network has a symmetric organization such that each channel can be dynamically configured to serve as either a data input or a teaching input to the system. An ART module forms a compressed recognition code within each channel. These codes, in turn, become inputs to a single ART system that organizes the global recognition code. When a predictive error occurs, a process called parallel match tracking simultaneously raises vigilances in multiple ART modules until reset is triggered in one of them. Parallel match tracking thereby resets only that portion of the recognition code with the poorest match, or minimum predictive confidence. This internally controlled selective reset process is a type of credit assignment that creates a parsimoniously connected learned network. Fusion ARTMAP's multi-channel coding is illustrated by simulations of the Quadruped Mammal database.
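The parallel match tracking step described above can be sketched in a few lines: all channel vigilances rise in lockstep until the module whose match is poorest falls below its vigilance and is reset. The match values, vigilance baseline, and step size below are purely hypothetical; this is a toy illustration of the control logic, not the Fusion ARTMAP implementation.

```python
# Toy sketch of parallel match tracking: after a predictive error,
# vigilances in all channels are raised together until the channel
# with the poorest match resets. All numbers are hypothetical.

def parallel_match_track(match_values, vigilances, step=0.01):
    """Raise all vigilances in lockstep until one module resets.

    Returns the index of the reset module (the poorest match) and
    the raised vigilance levels at the moment of reset.
    """
    vigilances = list(vigilances)
    while True:
        # A module resets when its match falls below its vigilance.
        for i, (m, v) in enumerate(zip(match_values, vigilances)):
            if m < v:
                return i, vigilances
        vigilances = [v + step for v in vigilances]

# Three channels with hypothetical match scores; channel 1 has the
# poorest match and is the first to be reset as vigilances rise.
reset_idx, raised = parallel_match_track([0.9, 0.6, 0.8], [0.5, 0.5, 0.5])
```

Only the poorest-matching channel is reset, which is the selective credit assignment the abstract describes.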
Abstract:
Fusion ARTMAP is a self-organizing neural network architecture for multi-channel, or multi-sensor, data fusion. Fusion ARTMAP generalizes the fuzzy ARTMAP architecture in order to adaptively classify multi-channel data. The network has a symmetric organization such that each channel can be dynamically configured to serve as either a data input or a teaching input to the system. An ART module forms a compressed recognition code within each channel. These codes, in turn, become inputs to a single ART system that organizes the global recognition code. When a predictive error occurs, a process called parallel match tracking simultaneously raises vigilances in multiple ART modules until reset is triggered in one of them. Parallel match tracking thereby resets only that portion of the recognition code with the poorest match, or minimum predictive confidence. This internally controlled selective reset process is a type of credit assignment that creates a parsimoniously connected learned network.
Abstract:
Gliomagenesis is driven by a complex network of genetic alterations, and while the glioma genome has been a focus of investigation for many years, critical gaps in our knowledge of this disease remain. The identification of novel molecular biomarkers remains a focus of the greater cancer community as a method to improve the consistency and accuracy of pathological diagnosis. In addition, novel molecular biomarkers are urgently needed for the identification of targets that may ultimately result in novel therapeutics aimed at improving glioma treatment. Through the identification of new biomarkers, laboratories will focus future studies on the molecular mechanisms that underlie glioma development. Here, we report a series of genomic analyses identifying novel molecular biomarkers in multiple histopathological subtypes of glioma and refine the classification of malignant gliomas. We have completed a large-scale analysis of the WHO grade II-III astrocytoma exome and report frequent mutations in the chromatin modifier, alpha thalassemia mental retardation x-linked (
Abstract:
The paper considers scheduling problems for parallel dedicated machines subject to resource constraints. A fairly complete computational complexity classification is obtained, and a number of polynomial-time algorithms are designed. For the problem with a fixed number of machines in which a job uses at most one resource of unit size, a polynomial-time approximation scheme is offered.
Abstract:
Agglomerative cluster analyses encompass many techniques, which have been widely used in various fields of science. In biology, and specifically ecology, datasets are generally highly variable and may contain outliers, which makes it more difficult to identify the number of clusters. Here we present a new criterion to statistically determine the optimal level of partition in a classification tree. The robustness of the criterion is tested against perturbed data (outliers), using an observation or variable with randomly generated values. The technique, called the Random Simulation Test (RST), is tested on (1) the well-known Iris dataset [Fisher, R.A., 1936. The use of multiple measurements in taxonomic problems. Ann. Eugenic. 7, 179–188], (2) simulated data with predetermined numbers of clusters following Milligan and Cooper [Milligan, G.W., Cooper, M.C., 1985. An examination of procedures for determining the number of clusters in a data set. Psychometrika 50, 159–179], and finally (3) real copepod community data previously analyzed in Beaugrand et al. [Beaugrand, G., Ibanez, F., Lindley, J.A., Reid, P.C., 2002. Diversity of calanoid copepods in the North Atlantic and adjacent seas: species associations and biogeography. Mar. Ecol. Prog. Ser. 232, 179–195]. The technique is compared to several standard techniques. RST generally performed better than existing algorithms on simulated data and proved to be especially efficient with highly variable datasets.
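The abstract does not give RST's exact statistic, but the general idea of checking an observed partition against randomly generated values can be sketched with SciPy's hierarchical clustering: choose the cut where the jump in fusion heights exceeds anything seen in structure-free randomized data. The null model (uniform resampling over the data's bounding box) and the threshold rule below are assumptions for illustration, not the published procedure.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

rng = np.random.default_rng(0)

def fusion_jumps(data):
    """Ward linkage fusion heights and the jumps between them."""
    heights = linkage(data, method="ward")[:, 2]
    return np.diff(heights)

def rst_like_k(X, n_perm=20):
    """Pick the number of clusters at the last fusion-height jump that
    exceeds every jump observed in randomized, structure-free data."""
    obs = fusion_jumps(X)
    # Null model (an assumption): resample uniformly over the bounding box.
    null_max = max(
        fusion_jumps(rng.uniform(X.min(0), X.max(0), X.shape)).max()
        for _ in range(n_perm)
    )
    signif = np.flatnonzero(obs > null_max)
    if signif.size == 0:
        return 1                      # no jump beats the random baseline
    # A significant jump after merge i leaves (n - 1 - i) clusters.
    return X.shape[0] - 1 - int(signif[-1])

# Two well-separated blobs: the only significant jump is the final merge.
X = np.vstack([rng.normal(0, 0.2, (30, 2)), rng.normal(10, 0.2, (30, 2))])
```

With clearly separated data the criterion cuts at the last merge, recovering two clusters.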
Abstract:
Sponge classification has long been based mainly on morphocladistic analyses but is now being greatly challenged by more than 12 years of accumulated molecular data analyses. The current study used phylogenetic hypotheses based on sequence data from 18S rRNA, 28S rRNA, and the CO1 barcoding fragment, combined with morphology to justify the resurrection of the order Axinellida Lévi, 1953. Axinellida occupies a key position in different morphologically derived topologies. The abandonment of Axinellida and the establishment of Halichondrida Vosmaer, 1887 sensu lato to contain Halichondriidae Gray, 1867, Axinellidae Carter, 1875, Bubaridae Topsent, 1894, Heteroxyidae Dendy, 1905, and a new family Dictyonellidae van Soest et al., 1990 was based on the conclusion that an axially condensed skeleton evolved independently in separate lineages in preference to the less parsimonious assumption that asters (star-shaped spicules), acanthostyles (club-shaped spicules with spines), and sigmata (C-shaped spicules) each evolved more than once. Our new molecular trees are congruent and contrast with the earlier, morphologically based, trees. The results show that axially condensed skeletons, asters, acanthostyles, and sigmata are all homoplasious characters. The unrecognized homoplasious nature of these characters explains much of the incongruence between molecular-based and morphology-based phylogenies. We use the molecular trees presented here as a basis for re-interpreting the morphological characters within Heteroscleromorpha. The implications for the classification of Heteroscleromorpha are discussed and a new order Biemnida ord. nov. is erected.
Abstract:
Shoeprint evidence collected from crime scenes can play an important role in forensic investigations. Usually, the analysis of shoeprints is carried out manually and is based on human expertise and knowledge. As well as being error prone, such a manual process can also be time consuming, thus affecting the usability and suitability of shoeprint evidence in a court of law. Thus, an automatic system for classification and retrieval of shoeprints has the potential to be a valuable tool. This paper presents a solution for the automatic retrieval of shoeprints which is considerably more robust than existing solutions in the presence of geometric distortions such as scale and rotation distortions. It addresses the issue of classifying partial shoeprints in the presence of rotation, scale and noise distortions and relies on the use of two local point-of-interest detectors whose matching scores are combined. In this work, multiscale Harris and Hessian detectors are used to select corners and blob-like structures in a scale-space representation for scale invariance, while the Scale Invariant Feature Transform (SIFT) descriptor is employed to achieve rotation invariance. The proposed technique is based on combining the matching scores of the two detectors at the score level. Our evaluation has shown that it outperforms both detectors in most of our extended experiments when retrieving partial shoeprints with geometric distortions, and is clearly better than similar work published in the literature. We also demonstrate improved performance in the face of wear and tear. As a matter of fact, whilst the proposed work outperforms similar algorithms in the literature, it is shown that achieving good retrieval performance is not constrained by acquiring a full print from a scene of crime, as a partial print can still be used to attain retrieval results comparable to those of the full print.
This gives crime investigators more flexibility in choosing the parts of a print to search for in a database of footwear.
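The score-level combination of two detectors can be illustrated with a generic min-max normalize-and-sum rule; the paper's actual fusion rule and score definitions are not specified in the abstract, so the numbers and the sum rule below are illustrative assumptions.

```python
import numpy as np

# Illustrative score-level fusion of two detectors' matching scores.
# The min-max normalization and sum rule are a generic sketch, not
# necessarily the fusion rule used in the paper.

def minmax(scores):
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def fuse(scores_a, scores_b):
    """Combine per-candidate matching scores from two detectors
    (e.g. multiscale Harris and Hessian) by normalizing each set
    to [0, 1] and summing."""
    return minmax(scores_a) + minmax(scores_b)

# Hypothetical matching scores for four database prints.
harris = [12.0, 30.0, 18.0, 25.0]
hessian = [0.2, 0.9, 0.4, 0.5]
ranking = np.argsort(fuse(harris, hessian))[::-1]  # best match first
```

Normalizing before summing keeps either detector's raw score range from dominating the combined ranking.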
Abstract:
BACKGROUND: This study describes the prevalence, associated anomalies, and demographic characteristics of cases of multiple congenital anomalies (MCA) in 19 population-based European registries (EUROCAT) covering 959,446 births in 2004 and 2010. METHODS: EUROCAT implemented a computer algorithm for classification of congenital anomaly cases followed by manual review of potential MCA cases by geneticists. MCA cases are defined as cases with two or more major anomalies of different organ systems, excluding sequences, chromosomal and monogenic syndromes. RESULTS: The combination of an epidemiological and clinical approach for classification of cases has improved the quality and accuracy of the MCA data. Total prevalence of MCA cases was 15.8 per 10,000 births. Fetal deaths and termination of pregnancy were significantly more frequent in MCA cases compared with isolated cases (p < 0.001) and MCA cases were more frequently prenatally diagnosed (p < 0.001). Live born infants with MCA were more often born preterm (p < 0.01) and with birth weight < 2500 grams (p < 0.01). Respiratory and ear, face, and neck anomalies were the most likely to occur with other anomalies (34% and 32%) and congenital heart defects and limb anomalies were the least likely to occur with other anomalies (13%) (p < 0.01). However, due to their high prevalence, congenital heart defects were present in half of all MCA cases. Among males with MCA, the frequency of genital anomalies was significantly greater than the frequency of genital anomalies among females with MCA (p < 0.001). CONCLUSION: Although rare, MCA cases are an important public health issue, because of their severity. The EUROCAT database of MCA cases will allow future investigation on the epidemiology of these conditions and related clinical and diagnostic problems.
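As a quick consistency check on the reported figures, the stated prevalence and birth denominator imply the approximate number of MCA cases in the registry data:

```python
# Back-of-envelope check: a prevalence of 15.8 per 10,000 over
# 959,446 births implies the approximate number of MCA cases.
births = 959_446
prevalence_per_10k = 15.8
approx_cases = births * prevalence_per_10k / 10_000  # about 1516 cases
```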
Abstract:
The purpose of this study was to investigate Howard Gardner's (1983) Multiple Intelligences theory, which proposes that there are eight independent intelligences: Linguistic, Spatial, Logical/Mathematical, Interpersonal, Intrapersonal, Naturalistic, Bodily-Kinesthetic, and Musical. To explore Gardner's theory, two measures of each ability area were administered to 200 participants. Each participant also completed a measure of general cognitive ability, a personality inventory, an ability self-rating scale, and an ability self-report questionnaire. Nonverbal measures were included for most intelligence domains, and a wide range of content was sampled in Gardner's domains. Results showed that all tests of purely cognitive abilities were significantly correlated with the measure of general cognitive ability, whereas Musical, Bodily-Kinesthetic, and one of the Intrapersonal measures were not. Contrary to what Multiple Intelligences theory would seem to predict, correlations among the tests revealed a positive manifold, and factor analysis indicated a large factor of general intelligence, with a mathematical reasoning test and a classification task from the Naturalistic domain having the highest g loadings. There were only minor sex differences in performance on the ability tests. Participants' self-estimates of ability were significantly and positively correlated with actual performance in some, but not all, intelligences. With regard to personality, a hypothesized association between Openness to Experience and crystallized intelligence was supported. The implications of the findings with regard to the nature of mental abilities are discussed, and recommendations for further research are made.
Abstract:
Affiliation: Centre Robert-Cedergren in bioinformatics and genomics, Université de Montréal & Department of Biochemistry, Université de Montréal
Abstract:
Most panel unit root tests are designed to test the joint null hypothesis of a unit root for each individual series in a panel. After a rejection, it will often be of interest to identify which series can be deemed stationary and which can be deemed nonstationary. Researchers will sometimes carry out this classification on the basis of n individual (univariate) unit root tests based on some ad hoc significance level. In this paper, we demonstrate how to use the false discovery rate (FDR) in evaluating I(1)/I(0) classifications based on individual unit root tests when the cross-section (n) and time-series (T) dimensions are large. We report results from a simulation experiment and illustrate the methods on two data sets.
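One standard way to control the FDR over n individual unit root tests is the Benjamini-Hochberg step-up procedure; the paper's exact procedure may differ, so the sketch below (with hypothetical p-values) only illustrates the I(1)/I(0) classification idea.

```python
import numpy as np

def bh_reject(pvals, q=0.05):
    """Benjamini-Hochberg step-up: return a boolean mask of the series
    whose unit-root null is rejected (classified I(0)) while
    controlling the false discovery rate at level q."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    thresh = q * np.arange(1, n + 1) / n       # step-up thresholds
    below = p[order] <= thresh
    reject = np.zeros(n, dtype=bool)
    if below.any():
        k = np.max(np.flatnonzero(below))      # largest index passing
        reject[order[: k + 1]] = True          # reject all smaller p-values
    return reject

# Hypothetical p-values from n individual unit root tests.
pvals = [0.001, 0.008, 0.04, 0.20, 0.65, 0.90]
stationary = bh_reject(pvals, q=0.05)          # True -> classified I(0)
```

Unlike a fixed ad hoc significance level, the threshold here adapts to the number and ordering of the p-values.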
Abstract:
In machine learning, classification is the process of assigning a new observation to a given category. Classifiers, which implement classification algorithms, have been widely studied over the past decades. Traditional classifiers are based on algorithms such as SVMs and neural networks, and are usually executed in software on CPUs, so the system suffers from poor performance and high energy consumption. Although GPUs can be used to accelerate the computation of some classifiers, their high power consumption prevents the technology from being deployed on portable devices such as embedded systems. To make the classification system lighter, classifiers should be able to run on more compact hardware instead of a group of CPUs or GPUs, and the classifiers themselves should be optimized for that hardware. In this thesis, we explore the implementation of a novel classifier on an FPGA-based hardware platform. The classifier, designed by Alain Tapp (Université de Montréal), is based on a large number of lookup tables that form tree-shaped circuits performing the classification tasks. The FPGA, with its rich lookup-table resources and highly parallel architecture, seems tailor-made to implement this classifier. Our work shows that FPGAs can implement several classifiers and perform classification on high-definition images at very high speed.
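The idea of a classifier built from small lookup tables wired into a tree can be illustrated with a toy sketch; the structure below (two layers of 2-input LUTs over a 4-bit input) is an invented miniature example, not the actual circuit designed by Alain Tapp.

```python
# Toy sketch of a LUT-tree classifier: each node is a small truth
# table, as an FPGA lookup table would be. The tables and wiring
# here are hypothetical.

def lut(table):
    """A k-input lookup table: maps a tuple of bits to one bit."""
    return lambda *bits: table[bits]

# Layer 1: two 2-input LUTs, each defined by an explicit truth table.
lut_and = lut({(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1})
lut_or  = lut({(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1})
# Layer 2 (root): a 2-input LUT combining the layer-1 outputs.
lut_xor = lut({(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0})

def classify(b3, b2, b1, b0):
    """Tree of LUTs mapping a 4-bit input to a binary class label."""
    return lut_xor(lut_and(b3, b2), lut_or(b1, b0))

label = classify(1, 1, 0, 0)   # AND(1,1)=1, OR(0,0)=0, XOR(1,0)=1
```

Because every node is a constant-size table lookup, such a tree maps naturally onto FPGA LUT resources and evaluates all branches in parallel hardware.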
Abstract:
In this paper an attempt has been made to accurately determine the number of Premature Ventricular Contraction (PVC) cycles in a given Electrocardiogram (ECG) using a wavelet constructed from multiple Gaussian functions. It is difficult to assess the ECGs of patients who are continuously monitored over a long period of time. Hence the proposed method of classification will help doctors determine the severity of PVC in a patient. Principal Component Analysis (PCA) and a simple classifier have been used in addition to the specially developed wavelet transform. The proposed wavelet has been designed using multiple Gaussian functions which, when summed, resemble a normal ECG. The number of Gaussians used depends on the number of peaks present in a normal ECG. The developed wavelet satisfies all the properties of a traditional continuous wavelet. The new wavelet was optimized using a genetic algorithm (GA). ECG records from the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) database have been used for validation. Out of the 8694 ECG cycles used for evaluation, the classification algorithm responded with an accuracy of 97.77%. In order to compare the performance of the new wavelet, classification was also performed using standard wavelets such as Morlet, Meyer, bior3.9, db5, db3, sym3 and Haar. The new wavelet outperforms the rest.
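A minimal sketch of a mother wavelet assembled from a sum of Gaussians: the peak positions, widths, and amplitudes below are hypothetical stand-ins for the ECG-like peaks mentioned above (in the paper they are tuned by a genetic algorithm), and the recentering step enforces the basic zero-mean admissibility property numerically.

```python
import numpy as np

# Sketch of a mother wavelet built from a sum of Gaussians. The
# parameters are illustrative, not the GA-optimized values.

def gaussian(t, mu, sigma, amp):
    return amp * np.exp(-((t - mu) ** 2) / (2 * sigma ** 2))

def gaussian_sum_wavelet(t, params):
    """Sum of Gaussians, recentered to zero mean so the basic
    admissibility condition (integral of psi = 0) holds on the grid."""
    psi = sum(gaussian(t, mu, s, a) for mu, s, a in params)
    return psi - psi.mean()

t = np.linspace(-1, 1, 2001)
# Three hypothetical peaks loosely standing in for P, QRS and T waves.
params = [(-0.4, 0.08, 0.2), (0.0, 0.03, 1.0), (0.45, 0.10, 0.3)]
psi = gaussian_sum_wavelet(t, params)
```

Adding more Gaussians, one per ECG peak, changes only the `params` list, which matches the paper's observation that the number of Gaussians tracks the number of peaks in a normal ECG.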
Abstract:
We discuss the problem of finding sparse representations of a class of signals. We formalize the problem and prove that it is NP-complete, both in the case of a single signal and in that of multiple ones. Next we develop a simple approximation method for the problem and show experimental results using artificially generated signals. Furthermore, we use our approximation method to find sparse representations of classes of real signals, specifically of images of pedestrians. We discuss the relation between our formulation of the sparsity problem and the problem of finding representations of objects that are compact and appropriate for detection and classification.
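Since the exact sparse-representation problem is NP-complete, greedy schemes such as matching pursuit are the usual approximations; the paper's own method is not detailed in the abstract, so the sketch below is a generic matching pursuit on a toy orthonormal dictionary.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy sparse approximation: repeatedly pick the dictionary
    atom (column, assumed unit-norm) most correlated with the
    residual and subtract its contribution."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        corr = dictionary.T @ residual
        k = int(np.argmax(np.abs(corr)))       # best-matching atom
        coeffs[k] += corr[k]
        residual -= corr[k] * dictionary[:, k]
    return coeffs, residual

# Toy example: an orthonormal dictionary recovers a 2-sparse signal.
D = np.eye(4)
x = np.array([0.0, 3.0, 0.0, -1.5])
coeffs, residual = matching_pursuit(x, D, n_atoms=2)
```

For orthonormal dictionaries the greedy choice is exact; for overcomplete dictionaries it is only an approximation, which is the gap the NP-completeness result concerns.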