989 results for Cross-classification
Abstract:
BACKGROUND: With the International Classification of Functioning, Disability and Health (ICF), we can now rely on a globally agreed-upon framework and system for classifying the typical spectrum of problems in the functioning of persons given the environmental context in which they live. ICF Core Sets are subgroups of ICF items selected to capture those aspects of functioning that are most likely to be affected by sleep disorders. OBJECTIVE: The objective of this paper is to outline the developmental process for the ICF Core Sets for Sleep. METHODS: The ICF Core Sets for Sleep will be defined at an ICF Core Sets Consensus Conference, which will integrate evidence from preliminary studies, namely (a) a systematic literature review regarding the outcomes used in clinical trials and observational studies, (b) focus groups with people in different regions of the world who have sleep disorders, (c) an expert survey with the involvement of international clinical experts, and (d) a cross-sectional study of people with sleep disorders in different regions of the world. CONCLUSION: The ICF Core Sets for Sleep are being designed with the goal of providing useful standards for research, clinical practice and teaching. It is hypothesized that the ICF Core Sets for Sleep will stimulate research that leads to an improved understanding of functioning, disability, and health in sleep medicine. It is further hoped that such research will lead to interventions and accommodations that improve the restoration and maintenance of functioning and minimize disability among people with sleep disorders throughout the world.
Abstract:
Accurate screening for anemia at Red Cross blood donor clinics is essential to maintain a safe national blood supply. Despite the importance of identifying anemia correctly by measurement of hemoglobin or hematocrit (hemoglobin/hematocrit), there is no consensus regarding the efficacy of the current two-stage screening method, which uses the Readacrit™ microhematocrit in conjunction with copper sulfate. A cross-sectional study was implemented in which hemoglobin/hematocrit was measured, with the present method and four new devices, on 504 prospective blood donors at a Canadian Red Cross permanent blood donor clinic in London, Canada. Concurrently gathered venous and capillary blood samples were tested by each device and compared to Coulter S IV™-determined venous standard readings. Instrument hemoglobin/hematocrit means were statistically calibrated to the standard ones in order to appraise systematic deviations from the standard. Classification analysis was employed to assess concordance between each instrument and the standard when classifying prospective donors as anemic or non-anemic. This was done both when each instrument was used alone (single stage) and when copper sulfate was used as a preliminary screen (two stage), and was simulated over a range of anemia prevalences. The Hemoximeter™ and Compur M1000™ devices had the highest correlations of hemoglobin measurements with the standard ones for both capillary (n.s.) and venous blood (p < .05). Analysis of variance (ANOVA) also showed them to be the most accurate (p < .05), as did both single- and two-stage classification analysis; therefore, both are recommended. There was a smaller difference between instruments for two-stage than for single-stage screening, so instrument choice is less crucial for the former. The present method was adequate for two-stage screening as tested, but simulations showed that it would discriminate poorly in populations with a higher prevalence of anemia. The Stat-crit and Readacrit, which measure hematocrit, became less accurate at the crucial low hematocrit levels. In light of this finding and the introduction of new, effective and easy-to-use hemoglobin-measuring instruments, the continued use of hematocrit as a surrogate for hemoglobin is not recommended.
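To make the two-stage design concrete, the sketch below computes the combined sensitivity, specificity, and predictive values of a serial screen (copper-sulfate pre-screen followed by an instrument check) over a range of anemia prevalences. The stage-wise sensitivities and specificities are hypothetical placeholders, not values from the study.

```python
import numpy as np

def two_stage_screen(sens1, spec1, sens2, spec2, prevalence):
    """Illustrative two-stage screen: a donor is called anemic only if flagged by
    both stages (copper-sulfate pre-screen, then instrument confirmation).
    All stage-wise performance figures here are hypothetical."""
    # Combined sensitivity: an anemic donor must be caught by both stages.
    sens = sens1 * sens2
    # Combined specificity: a non-anemic donor is cleared if either stage passes them.
    spec = 1 - (1 - spec1) * (1 - spec2)
    p = prevalence
    ppv = sens * p / (sens * p + (1 - spec) * (1 - p))
    npv = spec * (1 - p) / (spec * (1 - p) + (1 - sens) * p)
    return sens, spec, ppv, npv

# Simulate over a range of anemia prevalences, as in the study design.
for p in np.linspace(0.01, 0.20, 5):
    print(round(p, 3), two_stage_screen(0.90, 0.80, 0.95, 0.90, p))
```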
Abstract:
Cervical cancer is the leading cause of death and disease from malignant neoplasms among women in developing countries. Even though the Pap smear has significantly decreased the number of deaths from cervical cancer in past years, it has its limitations. Researchers have developed an automated screening machine which can potentially detect abnormal cases that are overlooked by conventional screening. The goal of quantitative cytology is to classify the patient's tissue sample based on quantitative measurements of the individual cells. It is also much cheaper and potentially takes less time. One of the major challenges of collecting cells with a cytobrush is the possibility of not sampling any existing dysplastic cells on the cervix. Being able to correctly classify patients who have disease without the presence of dysplastic cells could improve the accuracy of quantitative cytology algorithms. Subtle morphologic changes in normal-appearing tissues adjacent to or distant from malignant tumors have been shown to exist, but a comparison of various statistical methods, including many recent advances in the statistical learning field, has not previously been done. The objective of this thesis is to apply different classification methods to quantitative cytology data for the detection of malignancy associated changes (MACs). In this thesis, Elastic Net was the best-performing algorithm. When applying the Elastic Net algorithm to the test set, we combined the training and validation sets into a single "training" set and used 5-fold cross-validation to choose the Elastic Net parameter. It achieved a sensitivity of 47% at 80% specificity, an AUC of 0.52, and a partial AUC of 0.10 (95% CI 0.09-0.11).
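As an illustration of the model-selection step described above, the sketch below tunes an elastic-net penalized logistic regression by 5-fold cross-validation and reports sensitivity at 80% specificity. scikit-learn's LogisticRegression stands in for the thesis implementation, and the feature matrix X and labels y are hypothetical.

```python
# Hedged sketch: choosing elastic-net parameters by 5-fold cross-validation and
# scoring sensitivity at a fixed 80% specificity. X (per-patient quantitative
# cytology features) and y (disease labels) are hypothetical placeholders.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import roc_curve, roc_auc_score

def fit_elastic_net(X_train, y_train):
    # l1_ratio (mixing) and C (inverse regularization strength) tuned by 5-fold CV.
    grid = {"l1_ratio": [0.1, 0.5, 0.9], "C": [0.01, 0.1, 1.0]}
    model = LogisticRegression(penalty="elasticnet", solver="saga", max_iter=5000)
    search = GridSearchCV(model, grid, cv=5, scoring="roc_auc")
    search.fit(X_train, y_train)
    return search.best_estimator_

def sensitivity_at_specificity(model, X_test, y_test, target_spec=0.80):
    scores = model.predict_proba(X_test)[:, 1]
    fpr, tpr, _ = roc_curve(y_test, scores)
    # Sensitivity (TPR) at the operating point that keeps specificity >= target.
    ok = (1 - fpr) >= target_spec
    return (tpr[ok].max() if ok.any() else 0.0), roc_auc_score(y_test, scores)
```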
Abstract:
INTRODUCTION: Objective assessment of motor skills has become an important challenge in minimally invasive surgery (MIS) training. Currently, there is no gold standard defining and determining residents' surgical competence. To aid in the decision process, we analyze the validity of a supervised classifier to determine the degree of MIS competence based on assessment of psychomotor skills. METHODOLOGY: The ANFIS is trained to classify performance in a box trainer peg transfer task performed by two groups (expert/non-expert). There were 42 participants included in the study: the non-expert group consisted of 16 medical students and 8 residents (< 10 MIS procedures performed), whereas the expert group consisted of 14 residents (> 10 MIS procedures performed) and 4 experienced surgeons. Instrument movements were captured by means of the Endoscopic Video Analysis (EVA) tracking system. Nine motion analysis parameters (MAPs) were analyzed, including time, path length, depth, average speed, average acceleration, economy of area, economy of volume, idle time and motion smoothness. Data reduction was performed by means of principal component analysis, and the result was then used to train the ANFIS net. Performance was measured by leave-one-out cross-validation. RESULTS: The ANFIS presented an accuracy of 80.95%, with 13 experts and 21 non-experts correctly classified. Total root mean square error was 0.88, while the area under the classifier's ROC curve (AUC) was measured at 0.81. DISCUSSION: We have shown the usefulness of ANFIS for classification of MIS competence in a simple box trainer exercise. The main advantage of using ANFIS resides in its continuous output, which allows fine discrimination of surgical competence. There are, however, challenges that must be taken into account when considering use of ANFIS (e.g. training time, architecture modeling). Despite this, we have shown the discriminative power of ANFIS for a low-difficulty box trainer task, regardless of the individual significances between MAPs. Future studies are required to confirm the findings, with inclusion of new tasks, conditions and sample populations.
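A minimal sketch of the evaluation protocol (PCA data reduction followed by leave-one-out cross-validation) is given below; since ANFIS has no standard scikit-learn implementation, a generic classifier stands in purely to illustrate the pipeline, and the data are hypothetical.

```python
# Hedged sketch of the evaluation pipeline: PCA on the nine motion analysis
# parameters, then leave-one-out cross-validation. ANFIS itself is not in
# scikit-learn, so an SVM stands in only to illustrate the protocol.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

def loocv_accuracy(X, y):
    """X: (n_participants, 9) MAP matrix; y: expert/non-expert labels (hypothetical)."""
    pipe = make_pipeline(StandardScaler(), PCA(n_components=3), SVC())
    scores = cross_val_score(pipe, X, y, cv=LeaveOneOut())
    return scores.mean()  # fraction of participants correctly classified
```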
Abstract:
Background Objective assessment of psychomotor skills has become an important challenge in the training of minimally invasive surgical (MIS) techniques. Currently, no gold standard defining surgical competence exists for classifying residents according to their surgical skills. Supervised classification has been proposed as a means for objectively establishing competence thresholds in psychomotor skills evaluation. This report presents a study comparing three classification methods for establishing their validity in a set of tasks for basic skills assessment. Methods Linear discriminant analysis (LDA), support vector machines (SVM), and adaptive neuro-fuzzy inference systems (ANFIS) were used. A total of 42 participants, divided into an experienced group (4 expert surgeons and 14 residents with >10 laparoscopic surgeries performed) and a nonexperienced group (16 students and 8 residents with <10 laparoscopic surgeries performed), performed three box trainer tasks validated for assessment of MIS psychomotor skills. Instrument movements were captured using the TrEndo tracking system, and nine motion analysis parameters (MAPs) were analyzed. The performance of the classifiers was measured by leave-one-out cross-validation using the scores obtained by the participants. Results The mean accuracy performances of the classifiers were 71% (LDA), 78.2% (SVM), and 71.7% (ANFIS). No statistically significant differences in performance were identified between the classifiers. Conclusions The three proposed classifiers showed good performance in the discrimination of skills, especially when information from all MAPs and tasks combined was considered. A correlation between the surgeons' previous experience and their execution of the tasks could be ascertained from the results. However, misclassifications across all the classifiers could imply the existence of other factors influencing psychomotor competence.
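A compact sketch of how such a classifier comparison can be run under leave-one-out cross-validation is shown below; ANFIS is omitted for lack of a standard implementation, and the MAP feature matrix and experience labels are hypothetical.

```python
# Hedged sketch: comparing LDA and SVM accuracies under leave-one-out cross-validation,
# as in the study above. X (MAP scores across tasks) and y (experience labels) are
# placeholders; ANFIS is omitted here.
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

def compare_classifiers(X, y):
    classifiers = {"LDA": LinearDiscriminantAnalysis(), "SVM": SVC(kernel="rbf")}
    return {name: cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
            for name, clf in classifiers.items()}
```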
Abstract:
The iProClass database is an integrated resource that provides comprehensive family relationships and structural and functional features of proteins, with rich links to various databases. It is extended from ProClass, a protein family database that integrates PIR superfamilies and PROSITE motifs. The iProClass currently consists of more than 200 000 non-redundant PIR and SWISS-PROT proteins organized into more than 28 000 superfamilies, 2600 domains, 1300 motifs, 280 post-translational modification sites and links to more than 30 databases of protein families, structures, functions, genes, genomes, literature and taxonomy. Protein and family summary reports provide rich annotations, including membership information with length, taxonomy and keyword statistics, full family relationships, comprehensive enzyme and PDB cross-references and graphical feature display. The database facilitates classification-driven annotation for protein sequence databases and complete genomes, and supports structural and functional genomic research. The iProClass is implemented in the Oracle 8i object-relational system and is available for sequence search and report retrieval at http://pir.georgetown.edu/iproclass/.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-04
Abstract:
This study aimed to replicate and cross-validate the Rapid Screen of Concussion (RSC) for diagnosing mild TBI (mTBI). One hundred (81 male, 19 female) cases of mTBI and 35 (23 male and 12 female) cases of orthopaedic injuries were tested within 24 hr of injury. Double cross-validation was used to examine whether total RSC scores obtained in the current sample generalised to those previously reported. In the new sample, mTBI patients answered fewer orientation questions, recalled fewer words on the learning trial and after a delay, judged fewer sentences in 2 min, and completed fewer symbols in the Digit Symbol Substitution Test than orthopaedic controls. The formulae and cut-offs developed on the original and new samples produced similar sensitivity and overall correct classification rates. Inclusion of the Digit Symbol Substitution Test performance of the new sample improved the sensitivity (80.2%) and specificity (82.6%) in males. It did not improve the correct classification rate in females, which was 89.5% sensitivity and 91.7% specificity before inclusion of the Digit Symbol Substitution Test. Taken together, these results indicate that a combined score on this 12-min screen yields a measure of the level of brain impairment up to 24 hr after mTBI.
Abstract:
Risk assessment systems for introduced species are being developed and applied globally, but methods for rigorously evaluating them are still in their infancy. We explore classification and regression tree models as an alternative to the current Australian Weed Risk Assessment system, and demonstrate how the performance of screening tests for unwanted alien species may be quantitatively compared using receiver operating characteristic (ROC) curve analysis. The optimal classification tree model for predicting weediness included just four out of a possible 44 attributes of introduced plants examined, namely: (i) intentional human dispersal of propagules; (ii) evidence of naturalization beyond the native range; (iii) evidence of being a weed elsewhere; and (iv) a high level of domestication. Intentional human dispersal of propagules in combination with evidence of naturalization beyond a plant's native range led to the strongest prediction of weediness. A high level of domestication in combination with no evidence of naturalization mitigated the likelihood of an introduced plant becoming a weed as a result of intentional human dispersal of propagules. Unlikely intentional human dispersal of propagules combined with no evidence of being a weed elsewhere led to the lowest predicted probability of weediness. The failure to include intrinsic plant attributes in the model suggests either that these attributes are not useful general predictors of weediness, or that the data and analysis were inadequate to elucidate the underlying relationship(s). This concurs with historical pessimism about whether we will ever be able to accurately predict invasive plants. Given the apparent importance of propagule pressure (the number of individuals of a species released), future attempts at evaluating screening model performance for identifying unwanted plants need to account for propagule pressure when collating and/or analysing datasets. The classification tree had a cross-validated sensitivity of 93.6% and specificity of 36.7%. Based on the area under the ROC curve, the performance of the classification tree in correctly classifying plants as weeds or non-weeds (area under ROC curve = 0.83 ± 0.021 SE) was slightly inferior to that of the current risk assessment system in use (area under ROC curve = 0.89 ± 0.018 SE), although it requires many fewer questions to be answered.
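For illustration, the sketch below fits a small classification tree and reports cross-validated sensitivity, specificity, and ROC AUC in the spirit of the comparison above; the attributes, data, and tree settings are hypothetical, not the published model.

```python
# Hedged sketch: a classification tree for weed-risk screening, evaluated with
# cross-validated ROC AUC and sensitivity/specificity. The plant attributes and
# labels are hypothetical placeholders.
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score, confusion_matrix

def evaluate_tree(X, y, threshold=0.5):
    """X: binary/ordinal plant attributes (e.g. human dispersal, naturalization);
    y: 1 = weed, 0 = non-weed (hypothetical)."""
    tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=5)
    probs = cross_val_predict(tree, X, y, cv=10, method="predict_proba")[:, 1]
    preds = (probs >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y, preds).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity, roc_auc_score(y, probs)
```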
Abstract:
Purpose: To determine whether curve-fitting analysis of the ranked segment distributions of topographic optic nerve head (ONH) parameters, derived using the Heidelberg Retina Tomograph (HRT), provides a more effective statistical descriptor to differentiate the normal from the glaucomatous ONH. Methods: The sample comprised 22 normal control subjects (mean age 66.9 years; S.D. 7.8) and 22 glaucoma patients (mean age 72.1 years; S.D. 6.9) confirmed by reproducible visual field defects on the Humphrey Field Analyser. Three 10°-images of the ONH were obtained using the HRT. The mean topography image was determined and the HRT software was used to calculate the rim volume, rim area to disc area ratio, normalised rim area to disc area ratio and retinal nerve fibre cross-sectional area for each patient at 10°-sectoral intervals. The values were ranked in descending order, and each ranked-segment curve of ordered values was fitted using the least squares method. Results: There was no difference in disc area between the groups. The group mean cup-disc area ratio was significantly lower in the normal group (0.204 ± 0.16) compared with the glaucoma group (0.533 ± 0.083) (p < 0.001). The visual field indices, mean deviation and corrected pattern S.D., were significantly greater (p < 0.001) in the glaucoma group (-9.09 dB ± 3.3 and 7.91 ± 3.4, respectively) compared with the normal group (-0.15 dB ± 0.9 and 0.95 dB ± 0.8, respectively). Univariate linear regression provided the best overall fit to the ranked segment data. The equation parameters of the regression line manually applied to the normalised rim area-disc area and the rim area-disc area ratio data correctly classified 100% of normal subjects and glaucoma patients. In this study sample, the regression analysis of ranked segment parameters was more effective than conventional ranked segment analysis, in which glaucoma patients were misclassified in approximately 50% of cases. Further investigation in larger samples will enable the calculation of confidence intervals for normality. These reference standards will then need to be investigated in an independent sample to fully validate the technique. Conclusions: Using a curve-fitting approach to fit ranked segment curves retains information relating to the topographic nature of neural loss. Such methodology appears to overcome some of the deficiencies of conventional ranked segment analysis and, subject to validation in larger scale studies, may potentially be of clinical utility for detecting and monitoring glaucomatous damage. © 2007 The College of Optometrists.
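The ranked-segment curve-fitting idea can be sketched as follows: sectoral values are sorted in descending order and a straight line is fitted by least squares. The example data below are synthetic and purely illustrative.

```python
# Hedged sketch of ranked-segment curve fitting: sectoral ONH values (e.g. rim
# area/disc area ratio at 10-degree intervals) are ranked in descending order and
# a straight line is fitted by least squares. The data are hypothetical.
import numpy as np

def ranked_segment_fit(sector_values):
    """Return slope and intercept of the least-squares line through ranked values."""
    ranked = np.sort(np.asarray(sector_values, dtype=float))[::-1]
    ranks = np.arange(1, ranked.size + 1)
    slope, intercept = np.polyfit(ranks, ranked, deg=1)
    return slope, intercept

# Example: 36 sectoral values (10-degree intervals) drawn at random for illustration.
rng = np.random.default_rng(0)
print(ranked_segment_fit(rng.uniform(0.2, 0.8, size=36)))
```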
Abstract:
Using firm-level data from nine developing countries, we demonstrate that certain institutions, like restrictive labour market regulations, that are considered bad for economic growth might be beneficial for production efficiency, whereas a good business environment, which is considered beneficial for economic growth, might have an adverse impact on production efficiency. We argue that our results suggest that there might be significant differences in the macro- and micro-impacts of institutional quality, such that the classification of institutions into 'good' and 'bad' might be premature. © The Author 2013. Published by Oxford University Press on behalf of the Cambridge Political Economy Society. All rights reserved.
Abstract:
ACM Computing Classification System (1998): J.3.
Abstract:
This research aims to establish new optimization methods for pattern recognition and classification of different white blood cells in actual patient data to enhance the process of diagnosis. Beckman-Coulter Corporation supplied flow cytometry data from numerous patients, which are used as training sets to exploit the different physiological characteristics of the samples provided. Support Vector Machines (SVM) and Artificial Neural Networks (ANN) were used as promising pattern classification techniques to identify different white blood cell samples and provide information to medical doctors in the form of diagnostic references for a specific disease state, leukemia. The obtained results show that when a neural network classifier is well configured and trained with cross-validation, it can perform better than support vector classifiers alone for this type of data. Furthermore, a new unsupervised learning algorithm, the Density-based Adaptive Window Clustering (DAWC) algorithm, was designed to process large volumes of data and find the locations of high-density data clusters in real time. It reduces the computational load to approximately O(N) computations, making the algorithm more attractive and faster than current hierarchical algorithms.
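As a rough illustration of locating a dense cluster in approximately O(N) time, the sketch below bins two-dimensional flow-cytometry-style points once and reports the densest bin. This is not the DAWC algorithm itself, only a simple stand-in for the idea.

```python
# Hedged sketch of density-based cluster location in roughly O(N): bin the points
# once, then report the densest bin centre. Illustrative only; not DAWC.
import numpy as np

def densest_region(points, bins=32):
    """points: (N, 2) array of flow-cytometry-style scatter values (hypothetical)."""
    hist, xedges, yedges = np.histogram2d(points[:, 0], points[:, 1], bins=bins)
    i, j = np.unravel_index(np.argmax(hist), hist.shape)  # densest bin
    cx = 0.5 * (xedges[i] + xedges[i + 1])
    cy = 0.5 * (yedges[j] + yedges[j + 1])
    return cx, cy, hist[i, j]
```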
Abstract:
Recommendation systems aim to help users make decisions more efficiently. The most widely used method in recommendation systems is collaborative filtering, of which a critical step is to analyze a user's preferences and make recommendations of products or services based on similarity analysis with other users' ratings. However, collaborative filtering is less usable for recommendation in the face of the "cold start" problem, i.e., when few comments have been given to products or services. To tackle this problem, we propose an improved method that combines collaborative filtering and data classification. We use hotel recommendation data to test the proposed method. The accuracy of the recommendations is determined from the rankings. Evaluations of the accuracy of the Top-3 and Top-10 recommendation lists are conducted using 10-fold cross-validation and ROC curves. The results show that, under the cold-start condition, the Top-3 hotel recommendation list produced by the combined method outperforms the Top-10 list in most cases.
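A minimal sketch of the user-based collaborative-filtering step (similarity-weighted prediction of a missing rating) is given below; the ratings matrix and the cosine-similarity choice are illustrative assumptions, not the paper's exact method.

```python
# Hedged sketch of a collaborative-filtering prediction: a user's rating for an item
# is estimated as the similarity-weighted average of other users' ratings. The
# ratings matrix (users x hotels, 0 = unrated) is a hypothetical placeholder.
import numpy as np

def predict_rating(ratings, user, item):
    """ratings: (n_users, n_items) array with 0 for missing entries."""
    target = ratings[user]
    num, den = 0.0, 0.0
    for other in range(ratings.shape[0]):
        if other == user or ratings[other, item] == 0:
            continue
        # Cosine similarity over items both users have rated.
        both = (target > 0) & (ratings[other] > 0)
        if not both.any():
            continue
        a, b = target[both], ratings[other][both]
        sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        num += sim * ratings[other, item]
        den += abs(sim)
    return num / den if den else np.nan
```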
Abstract:
The growth of networks, blogs, and users of social review sites makes the Internet an enormous source of data, in particular about how people think, feel, and act towards different issues. These days, people's opinions play an important role in politics, industry, education, etc. Governments, large and small industries, academic institutes, companies, and individuals therefore seek automatic techniques for extracting the information they need from large volumes of data. Sentiment analysis is a direct answer to this need. It is an application of natural language processing and computational linguistics that draws on advanced techniques such as machine learning and language models to capture positive, negative, or neutral evaluations, with or without their strength, in raw text. In this thesis, we study a case-based approach to document-level sentiment analysis. Our case-based approach builds a binary classifier that uses a set of classified documents and five different sentiment lexicons to extract polarity from the scores associated with reviews. Since sentiment analysis is inherently a domain-dependent task, which makes the work difficult and costly, we apply a cross-domain approach, basing our classifier on six different domains instead of limiting it to a single domain. To improve classification accuracy, we add negation detection as part of our algorithm. In addition, to improve the performance of our approach, some innovative modifications are applied. It is worth mentioning that our approach opens the way to further developments by adding more sentiment lexicons and datasets in the future.
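A minimal sketch of lexicon-based polarity scoring with single-token negation handling, in the spirit of the approach described, follows; the tiny lexicon and negator list are hypothetical placeholders for the five full lexicons used in the thesis.

```python
# Hedged sketch of lexicon-based polarity scoring with simple negation handling.
# The lexicon and negator list below are hypothetical placeholders.
LEXICON = {"good": 1.0, "great": 2.0, "bad": -1.0, "terrible": -2.0}
NEGATORS = {"not", "never", "no"}

def polarity(text):
    tokens = text.lower().split()
    score, negate = 0.0, False
    for tok in tokens:
        if tok in NEGATORS:
            negate = True  # flip the polarity of the next token only
            continue
        if tok in LEXICON:
            score += -LEXICON[tok] if negate else LEXICON[tok]
        negate = False     # negation applies only to the immediately following token
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(polarity("the room was not good but the staff were great"))
```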