878 results for ICF CLASSIFICATION
Abstract:
The Fuzzy ART system introduced herein incorporates computations from fuzzy set theory into ART 1. For example, the intersection (∩) operator used in ART 1 learning is replaced by the MIN operator (∧) of fuzzy set theory. Fuzzy ART reduces to ART 1 in response to binary input vectors, but can also learn stable categories in response to analog input vectors. In particular, the MIN operator reduces to the intersection operator in the binary case. Learning is stable because all adaptive weights can only decrease in time. A preprocessing step, called complement coding, uses on-cell and off-cell responses to prevent category proliferation. Complement coding normalizes input vectors while preserving the amplitudes of individual feature activations.
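The two mechanisms this abstract names, complement coding and the fuzzy MIN (∧) operator, can be sketched in a few lines. This is an illustrative fragment only, not the full Fuzzy ART architecture:

```python
import numpy as np

def complement_code(a):
    """Complement coding: represent input a in [0,1]^M as (a, 1 - a).

    The coded vector always has L1 norm M, so inputs are normalized
    while individual feature amplitudes are preserved.
    """
    a = np.asarray(a, dtype=float)
    return np.concatenate([a, 1.0 - a])

def fuzzy_and(x, w):
    """Fuzzy MIN (AND) operator; for binary vectors it reduces to
    the ART 1 set intersection."""
    return np.minimum(x, w)

I = complement_code([0.2, 0.7])  # [0.2, 0.7, 0.8, 0.3]; L1 norm = 2 = M
# Binary case: MIN reproduces the intersection used in ART 1 learning.
overlap = fuzzy_and(complement_code([1, 0]), complement_code([1, 1]))
```

Because every weight update takes a componentwise MIN with the input, weights can only shrink, which is the source of the stability property the abstract mentions.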
Abstract:
A new family of neural network architectures is presented. This family of architectures solves the problem of constructing and training minimal neural network classification expert systems by using switching theory. The primary insight that leads to the use of switching theory is that the problem of minimizing the number of rules and the number of IF statements (antecedents) per rule in a neural network expert system can be recast into the problem of minimizing the number of digital gates and the number of connections between digital gates in a Very Large Scale Integrated (VLSI) circuit. The rules that the neural network generates to perform a task are readily extractable from the network's weights and topology. Analysis and simulations on the Mushroom database illustrate the system's performance.
Abstract:
In this work, we investigate tennis stroke recognition using a single inertial measurement unit attached to a player's forearm during a competitive match. This paper evaluates the best approach for stroke detection using the accelerometers, gyroscopes or magnetometers embedded in the inertial measurement unit. This work identifies the optimal training data set for stroke classification and shows that classifiers can perform well when tested on players who were not used to train the classifier. This work provides a significant step forward for our overall goal, which is to develop next-generation sports coaching tools using both inertial and visual sensors in an instrumented indoor sporting environment.
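As a rough illustration of accelerometer-based stroke detection, candidate strokes can be flagged as threshold crossings of the acceleration magnitude; the threshold and refractory values below are invented for the example and are not taken from the paper:

```python
import numpy as np

def detect_strokes(acc_mag, threshold=2.5, refractory=20):
    """Flag candidate strokes where acceleration magnitude crosses a
    threshold, with a refractory window (in samples) so a single swing
    is not counted twice. Both parameter values are illustrative."""
    events, last = [], -refractory
    for i, a in enumerate(acc_mag):
        if a > threshold and i - last >= refractory:
            events.append(i)
            last = i
    return events

# Synthetic magnitude trace: quiet baseline with two impact spikes.
signal = np.full(200, 1.0)
signal[30:33] = 4.0
signal[120:123] = 3.5
strokes = detect_strokes(signal)
```

A detector like this would typically precede the classification stage, which then labels each detected window as forehand, backhand, serve, and so on.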
Abstract:
As a by-product of the ‘information revolution’ which is currently unfolding, lifetimes of man (and indeed computer) hours are being allocated for the automated and intelligent interpretation of data. This is particularly true in medical and clinical settings, where research into machine-assisted diagnosis of physiological conditions gains momentum daily. Of the conditions which have been addressed, however, automated classification of allergy has not been investigated, even though the numbers of allergic persons are rising, and undiagnosed allergies are the most likely to elicit fatal consequences. On the basis of the observations of allergists who conduct oral food challenges (OFCs), activity-based analyses of allergy tests were performed. Algorithms were investigated and validated by a pilot study which verified that accelerometer-based inquiry of human movements is particularly well-suited for objective appraisal of activity. However, when these analyses were applied to OFCs, accelerometer-based investigations were found to provide very poor separation between allergic and non-allergic persons, and it was concluded that the avenues explored in this thesis are inadequate for the classification of allergy. Heart rate variability (HRV) analysis is known to provide very significant diagnostic information for many conditions. Owing to this, electrocardiograms (ECGs) were recorded during OFCs for the purpose of assessing the effect that allergy induces on HRV features. It was found that with appropriate analysis, excellent separation between allergic and non-allergic subjects can be obtained. These results were, however, obtained with manual QRS annotations, and these are not a viable methodology for real-time diagnostic applications. Even so, this was the first work which has categorically correlated changes in HRV features to the onset of allergic events, and manual annotations yield undeniable affirmation of this.
Fostered by the successful results obtained with manual classifications, automatic QRS detection algorithms were investigated to facilitate the fully automated classification of allergy. The results obtained by this process are very promising. Most importantly, the work presented in this thesis did not produce any false positive classifications. This is a most desirable result for OFC classification, as it allows complete confidence to be attributed to classifications of allergy. Furthermore, these results could be particularly advantageous in clinical settings, as machine-based classification can detect the onset of allergy, which can allow for early termination of OFCs. Consequently, machine-based monitoring of OFCs has in this work been shown to possess the capacity to significantly and safely advance the current clinical state of the art in allergy diagnosis.
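The pipeline described, QRS annotations followed by HRV feature extraction, can be sketched as follows. SDNN and RMSSD are standard time-domain HRV features chosen here for illustration; the thesis's exact feature set is not specified in the abstract:

```python
import numpy as np

def hrv_features(r_peak_times_s):
    """Time-domain HRV features from R-peak (QRS) annotation times in
    seconds. SDNN and RMSSD are illustrative standard choices, not
    necessarily the features used in the thesis."""
    rr = np.diff(np.asarray(r_peak_times_s, dtype=float)) * 1000.0  # RR intervals, ms
    return {
        "mean_rr_ms": float(rr.mean()),
        "sdnn_ms": float(rr.std(ddof=1)),                       # overall variability
        "rmssd_ms": float(np.sqrt(np.mean(np.diff(rr) ** 2))),  # beat-to-beat variability
    }

# Synthetic beat times: heart rate drifting from 60 toward 100 bpm,
# a pattern a reaction during an OFC might plausibly produce.
beats = np.cumsum(np.linspace(1.0, 0.6, 60))
feats = hrv_features(beats)
```

In a fully automated system, the `r_peak_times_s` input would come from a QRS detector rather than manual annotation, which is precisely the substitution this part of the thesis evaluates.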
Abstract:
The electroencephalogram (EEG) is an important noninvasive tool used in the neonatal intensive care unit (NICU) for the neurologic evaluation of the sick newborn infant. It provides an excellent assessment of at-risk newborns and formulates a prognosis for long-term neurologic outcome. The automated analysis of neonatal EEG data in the NICU can provide valuable information to the clinician, facilitating medical intervention. The aim of this thesis is to develop a system for automatic classification of neonatal EEG, which can be divided into two main parts: (1) classification of neonatal EEG seizure from nonseizure, and (2) classification of neonatal background EEG into several grades based on the severity of the injury, using atomic decomposition. Atomic decomposition techniques use redundant time-frequency dictionaries for sparse signal representations or approximations. The first novel contribution of this thesis is the development of a time-frequency dictionary coherent with the neonatal EEG seizure states. This dictionary was able to track the time-varying nature of the EEG signal. It was shown that by using atomic decomposition with the proposed dictionary, the neonatal EEG transition from nonseizure to seizure states could be detected efficiently. The second novel contribution of this thesis is the development of a neonatal seizure detection algorithm using several time-frequency features derived from the proposed dictionary. It was shown that the time-frequency features obtained from the atoms in the dictionary improved seizure detection accuracy compared to features obtained from the raw EEG signal. With the assistance of a supervised multiclass SVM classifier and several time-frequency features, several methods to automatically grade EEG were explored. In summary, the novel techniques proposed in this thesis contribute to the application of advanced signal processing techniques for the automatic assessment of neonatal EEG recordings.
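Atomic decomposition over a redundant dictionary is commonly realized by greedy matching pursuit, which can be sketched as below. The cosine dictionary here is a generic stand-in for illustration; the thesis's own seizure-adapted time-frequency dictionary is more elaborate:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=5):
    """Greedy matching pursuit: sparse approximation of a signal over a
    redundant dictionary whose unit-norm atoms are the columns of
    `dictionary`. At each step, pick the atom most correlated with the
    residual and subtract its contribution."""
    residual = signal.astype(float).copy()
    chosen = []
    for _ in range(n_atoms):
        corr = dictionary.T @ residual
        k = int(np.argmax(np.abs(corr)))
        coef = float(corr[k])
        residual -= coef * dictionary[:, k]
        chosen.append((k, coef))
    return chosen, residual

N = 256
t = np.arange(N)
freqs = np.linspace(0.01, 0.49, 200)        # redundant set of frequencies
D = np.cos(2 * np.pi * np.outer(t, freqs))  # one cosine atom per column
D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
x = 3.0 * D[:, 40] + 0.5 * D[:, 120]        # signal built from two atoms
atoms, res = matching_pursuit(x, D, n_atoms=2)
```

Two iterations recover the two generating atoms and leave a near-zero residual; the indices and coefficients of the selected atoms are the kind of sparse representation from which the thesis derives its time-frequency features.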
Abstract:
Gliomagenesis is driven by a complex network of genetic alterations, and while the glioma genome has been a focus of investigation for many years, critical gaps in our knowledge of this disease remain. The identification of novel molecular biomarkers remains a focus of the greater cancer community as a method to improve the consistency and accuracy of pathological diagnosis. In addition, novel molecular biomarkers are urgently needed for the identification of targets that may ultimately result in novel therapeutics aimed at improving glioma treatment. Through the identification of new biomarkers, laboratories will focus future studies on the molecular mechanisms that underlie glioma development. Here, we report a series of genomic analyses identifying novel molecular biomarkers in multiple histopathological subtypes of glioma and refine the classification of malignant gliomas. We have completed a large-scale analysis of the WHO grade II-III astrocytoma exome and report frequent mutations in the chromatin modifier alpha thalassemia/mental retardation X-linked (ATRX).
Abstract:
Detailed phenotypic characterization of B cell subpopulations is of utmost importance for the diagnosis and management of humoral immunodeficiencies, as they are used for classification of common variable immunodeficiencies. Since age-specific reference values remain scarce in the literature, we analysed by flow cytometry the proportions and absolute values of total, memory, switched memory and CD21(-/low) B cells in blood samples from 168 healthy children (1 day to 18 years) with special attention to the different subpopulations of CD21(low) B cells. The percentages of total memory B cells and their subsets significantly increased up to 5-10 years. In contrast, the percentages of immature CD21(-) B cells and of immature transitional CD21(low)CD38(hi) B cells decreased progressively with age, whereas the percentage of CD21(low) CD38(low) B cells remained stable during childhood. Our data stress the importance of age-specific reference values for the correct interpretation of B cell subsets in children as a diagnostic tool in immunodeficiencies.
Abstract:
p.103-111
Abstract:
Serial Analysis of Gene Expression (SAGE) is a relatively new method for monitoring gene expression levels and is expected to contribute significantly to the progress in cancer treatment by enabling a precise and early diagnosis. A promising application of SAGE gene expression data is classification of tumors. In this paper, we build three event models (the multivariate Bernoulli model, the multinomial model and the normalized multinomial model) for SAGE data classification. Both binary classification and multicategory classification are investigated. Experiments on two SAGE datasets show that the multivariate Bernoulli model performs well with small feature sizes, but the multinomial performs better at large feature sizes, while the normalized multinomial performs well with medium feature sizes. The multinomial achieves the highest overall accuracy.
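Of the three event models the abstract compares, the multinomial can be sketched as a naive Bayes classifier over tag counts. The toy data below is invented purely to show the mechanics (real SAGE data has thousands of tags):

```python
import numpy as np

def train_multinomial_nb(X, y):
    """Multinomial event model: per-class tag probabilities estimated
    from summed tag counts, with Laplace smoothing."""
    classes = np.unique(y)
    priors, cond = {}, {}
    for c in classes:
        Xc = X[y == c]
        priors[c] = np.log(len(Xc) / len(X))
        counts = Xc.sum(axis=0) + 1.0            # Laplace smoothing
        cond[c] = np.log(counts / counts.sum())
    return classes, priors, cond

def predict_multinomial_nb(model, x):
    """Score each class by log prior plus count-weighted log likelihoods."""
    classes, priors, cond = model
    scores = [priors[c] + (x * cond[c]).sum() for c in classes]
    return classes[int(np.argmax(scores))]

# Toy "SAGE-like" data: rows are tag-count vectors, labels are tumor classes.
X = np.array([[5, 0, 1], [4, 1, 0], [0, 6, 2], [1, 5, 3]])
y = np.array([0, 0, 1, 1])
model = train_multinomial_nb(X, y)
```

The multivariate Bernoulli model differs only in using tag presence/absence rather than counts, which is consistent with the abstract's finding that it fares better at small feature sizes where counts add little information.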
Abstract:
Composite resins and glass-ionomer cements were introduced to dentistry in the 1960s and 1970s, respectively. Since then, there has been a series of modifications to both materials, as well as the development of other groups of materials claiming intermediate characteristics between the two. The result is a confusing array of materials, leading to selection problems. While both materials are tooth-colored, there is a considerable difference in their properties, and it is important that each is used in the appropriate situation. Composite resin materials are esthetic and now show acceptable physical strength and wear resistance. However, they are hydrophobic, and therefore more difficult to handle in the oral environment, and cannot support ion migration. Also, the problems of gaining long-term adhesion to dentin have yet to be overcome. On the other hand, glass ionomers are water-based and therefore have the potential for ion migration, both inward and outward from the restoration, leading to a number of advantages. However, they lack the physical properties required for use in load-bearing areas. A logical classification designed to differentiate the materials was first published by McLean et al in 1994, but in the last 15 years, both types of material have undergone further research and modification. This paper is designed to bring the classification up to date so that the operator can make a suitable, evidence-based choice when selecting a material for any given situation.
Abstract:
Automatic taxonomic categorisation of 23 species of dinoflagellates was demonstrated using field-collected specimens. These dinoflagellates have been responsible for the majority of toxic and noxious phytoplankton blooms that have occurred in the coastal waters of the European Union in recent years, and have had a severe impact on the aquaculture industry. The performance of human 'expert' ecologists/taxonomists in identifying these species was compared to that achieved by two artificial neural network classifiers (multilayer perceptron and radial basis function networks) and two other statistical techniques, k-Nearest Neighbour and Quadratic Discriminant Analysis. The neural network classifiers outperformed the classical statistical techniques. Over extended trials, the human experts averaged 85%, while the radial basis function network achieved a best performance of 83%, the multilayer perceptron 66%, k-Nearest Neighbour 60%, and Quadratic Discriminant Analysis 56%.
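One of the classical baselines in this comparison, k-Nearest Neighbour, reduces to a few lines. The 2-D features and two-class data below are invented stand-ins for the study's image-derived features of 23 species:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """k-Nearest Neighbour: label a query by majority vote among
    the k closest training examples (Euclidean distance)."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

# Hypothetical 2-D "shape features" for two plankton classes (0 and 1).
X = np.array([[1.0, 1.0], [1.2, 0.9], [0.9, 1.1],
              [3.0, 3.1], [3.2, 2.9], [2.9, 3.0]])
y = np.array([0, 0, 0, 1, 1, 1])
```

Unlike the neural network classifiers in the study, kNN builds no model of the class boundary, which is one plausible reason it trailed the radial basis function network on this task.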