857 results for Computing Classification Systems
Abstract:
Dissertation submitted for the degree of Master in Informatics Engineering (Engenharia Informática)
Abstract:
The increase of publicly available sequencing data has allowed for rapid progress in our understanding of genome composition. As new information becomes available, we should constantly be updating and reanalyzing existing and newly acquired data. In this report we focus on transposable elements (TEs), which make up a significant portion of nearly all sequenced genomes. Our ability to accurately identify and classify these sequences is critical to understanding their impact on host genomes. At the same time, as we demonstrate in this report, problems with existing classification schemes have led to significant misunderstandings of the evolution of both TE sequences and their host genomes. In a pioneering publication, Finnegan (1989) proposed classifying all TE sequences into two classes based on transposition mechanisms and structural features: the retrotransposons (class I) and the DNA transposons (class II). We have retraced how ideas regarding TE classification and annotation in both prokaryotic and eukaryotic scientific communities have changed over time. This has led us to observe that: (1) a number of TEs have convergent structural features and/or transposition mechanisms that have led to misleading conclusions regarding their classification, (2) the evolution of TEs is similar to that of viruses in having several unrelated origins, (3) there might be at least 8 classes and 12 orders of TEs, including 10 novel orders. In an effort to address these classification issues we propose: (1) the outline of a universal TE classification, (2) a set of methods and classification rules that could be used by all scientific communities involved in the study of TEs, and (3) a 5-year schedule for the establishment of an International Committee for Taxonomy of Transposable Elements (ICTTE).
Abstract:
Automatic generation of classification rules has been an increasingly popular technique in commercial applications such as Big Data analytics, rule based expert systems and decision making systems. However, a principal problem that arises with most methods for generation of classification rules is the overfitting of training data. When dealing with Big Data, this may result in the generation of a large number of complex rules. This may not only increase computational cost but also lower the accuracy in predicting further unseen instances. This has led to the necessity of developing pruning methods for the simplification of rules. In addition, classification rules are used further to make predictions after the completion of their generation. Where efficiency is concerned, it is desirable to find the first rule that fires as quickly as possible when searching through a rule set. Thus a suitable structure is required to represent the rule set effectively. In this chapter, the authors introduce a unified framework for construction of rule based classification systems consisting of three operations on Big Data: rule generation, rule simplification and rule representation. The authors also review some existing methods and techniques used for each of the three operations and highlight their limitations. They introduce some novel methods and techniques they have recently developed. These methods and techniques are also discussed in comparison to existing ones with respect to efficient processing of Big Data.
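To make the "first rule that fires" idea concrete, the following is a minimal sketch, not the chapter's actual framework: the rule set is represented as an ordered list of (conditions, label) pairs, and prediction returns the label of the first rule whose conditions all match. The rule format and example rules are illustrative assumptions.

```python
def fires(rule, instance):
    """A rule fires when every (attribute, value) condition matches the instance."""
    conditions, _label = rule
    return all(instance.get(attr) == val for attr, val in conditions)

def predict(rule_set, instance, default="unknown"):
    """Search the ordered rule set and return the label of the first rule that fires."""
    for rule in rule_set:
        if fires(rule, instance):
            return rule[1]
    return default  # fall back when no rule fires

# Illustrative rule set: each rule is (list of conditions, class label).
rules = [
    ([("outlook", "sunny"), ("humidity", "high")], "no"),
    ([("outlook", "overcast")], "yes"),
]

print(predict(rules, {"outlook": "overcast", "humidity": "low"}))  # yes
```

Under this representation, rule simplification (pruning) shortens the list or the condition sets, which directly reduces the work done by the linear first-fire search.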
Abstract:
This study explores, in 3 steps, how the 3 main library classification systems, the Library of Congress Classification, the Dewey Decimal Classification, and the Universal Decimal Classification, cover human knowledge. First, we mapped the knowledge covered by the 3 systems. We used the 10 Pillars of Knowledge: Map of Human Knowledge, which comprises 10 pillars, as an evaluative model. We mapped all the subject-based classes and subclasses that are part of the first 2 levels of the 3 hierarchical structures. Then, we zoomed into each of the 10 pillars and analyzed how the 3 systems cover the 10 knowledge domains. Finally, we focused on the 3 library systems. Based on the way each of them covers the 10 knowledge domains, it is evident that they fail to adequately and systematically present contemporary human knowledge. They are unsystematic and biased, and, at the top 2 levels of the hierarchical structures, they are incomplete.
Abstract:
Mode of access: Internet.
Abstract:
"An environmental protection publication in the solid waste management series (SW-171)."
Abstract:
"June 1992."
Abstract:
A collection of photocopies of documents, chiefly from university libraries, assembled by the Systems and Procedures Exchange Center, and accompanied by SPEC flyer no. 85, June 1982.
Abstract:
Systems biology is based on computational modelling and simulation of large networks of interacting components. Models may be intended to capture processes, mechanisms, components and interactions at different levels of fidelity. Input data are often large and geographically dispersed, and may require the computation to be moved to the data, not vice versa. In addition, complex system-level problems require collaboration across institutions and disciplines. Grid computing can offer robust, scalable solutions for distributed data, compute and expertise. We illustrate some of the range of computational and data requirements in systems biology with three case studies: one requiring large computation but small data (orthologue mapping in comparative genomics), a second involving complex terabyte data (the Visible Cell project) and a third that is both computationally and data-intensive (simulations at multiple temporal and spatial scales). Authentication, authorisation and audit systems are currently not well scalable and may present bottlenecks for distributed collaboration, particularly where outcomes may be commercialised. Challenges remain in providing lightweight standards to facilitate the penetration of robust, scalable grid-type computing into diverse user communities to meet the evolving demands of systems biology.
Abstract:
A primary goal of context-aware systems is delivering the right information at the right place and right time to users in order to enable them to make effective decisions and improve their quality of life. There are three key requirements for achieving this goal: determining what information is relevant, personalizing it based on the users’ context (location, preferences, behavioral history, etc.), and delivering it to them in a timely manner without an explicit request from them. These requirements create a paradigm that we term “Proactive Context-aware Computing”. Most existing context-aware systems fulfill only a subset of these requirements. Many of these systems focus only on personalization of the requested information based on users’ current context. Moreover, they are often designed for specific domains. In addition, most of the existing systems are reactive: users request some information and the system delivers it to them. These systems are not proactive, i.e., they cannot anticipate users’ intent and behavior and act proactively without an explicit request from them. In order to overcome these limitations, we need to conduct a deeper analysis and enhance our understanding of context-aware systems that are generic, universal, proactive and applicable to a wide variety of domains. To support this dissertation, we explore several directions. Clearly the most significant sources of information about users today are smartphones. A large amount of users’ context can be acquired through them, and they can be used as an effective means to deliver information to users. In addition, social media such as Facebook, Flickr and Foursquare provide a rich and powerful platform to mine users’ interests, preferences and behavioral history. We employ the ubiquity of smartphones and the wealth of information available from social media to address the challenge of building proactive context-aware systems.
We have implemented and evaluated a few approaches, including some as part of the Rover framework, to achieve the paradigm of Proactive Context-aware Computing. Rover is a context-aware research platform which has been evolving for the last 6 years. Since location is one of the most important contexts for users, we have developed ‘Locus’, an indoor localization, tracking and navigation system for multi-story buildings. Other important dimensions of users’ context include the activities that they are engaged in. To this end, we have developed ‘SenseMe’, a system that leverages the smartphone and its multiple sensors in order to perform multidimensional context and activity recognition for users. As part of the ‘SenseMe’ project, we also conducted an exploratory study of privacy, trust, risks and other concerns of users with smartphone-based personal sensing systems and applications. To determine what information would be relevant to users’ situations, we have developed ‘TellMe’, a system that employs a new, flexible and scalable approach based on Natural Language Processing techniques to perform bootstrapped discovery and ranking of relevant information in context-aware systems. In order to personalize the relevant information, we have also developed an algorithm and system for mining a broad range of users’ preferences from their social network profiles and activities. For recommending new information to the users based on their past behavior and context history (such as visited locations, activities and time), we have developed a recommender system and approach for performing multi-dimensional collaborative recommendations using tensor factorization. For timely delivery of personalized and relevant information, it is essential to anticipate and predict users’ behavior.
To this end, we have developed a unified infrastructure, within the Rover framework, and implemented several novel approaches and algorithms that employ various contextual features and state-of-the-art machine learning techniques for building diverse behavioral models of users. Examples of generated models include classifying users’ semantic places and mobility states, predicting their availability for accepting calls on smartphones and inferring their device charging behavior. Finally, to enable proactivity in context-aware systems, we have also developed a planning framework based on Hierarchical Task Network (HTN) planning. Together, these works provide a major push in the direction of proactive context-aware computing.
Abstract:
Prognostic procedures can be based on ranked linear models. Ranked regression-type models are designed on the basis of feature vectors combined with a set of relations defined on selected pairs of these vectors. Feature vectors are composed of numerical results of measurements on particular objects or events. Ranked relations defined on selected pairs of feature vectors represent additional knowledge and can reflect experts' opinions about the considered objects. Ranked models take the form of linear transformations of feature vectors onto a line that preserve a given set of relations as well as possible. Ranked models can be designed through the minimization of a special type of convex and piecewise linear (CPL) criterion function. Some sets of ranked relations cannot be well represented by one ranked model. Decomposition of the global model into a family of local ranked models could improve the representation. A procedure for the decomposition of ranked models is described in this paper.
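The core idea can be sketched in a few lines. This is an illustrative toy, not the paper's exact CPL formulation: a weight vector w projects feature vectors onto a line, and each ranked relation (x_lo, x_hi), meaning x_lo should score below x_hi, contributes a penalty max(0, 1 - w·(x_hi - x_lo)). The sum of these penalties is convex and piecewise linear, and is minimized here by plain subgradient descent; the margin of 1, learning rate and step count are arbitrary choices for the example.

```python
def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def fit_ranked(pairs, dim, steps=500, lr=0.05):
    """Minimize sum(max(0, 1 - w.(x_hi - x_lo))) over ranked pairs by subgradient descent."""
    w = [0.0] * dim
    for _ in range(steps):
        for x_lo, x_hi in pairs:
            diff = [h - l for h, l in zip(x_hi, x_lo)]
            if 1.0 - dot(w, diff) > 0.0:  # relation violated (within the margin)
                w = [wi + lr * di for wi, di in zip(w, diff)]
    return w

# Two ranked relations on three 2-D feature vectors: x1 before x2, x2 before x3.
x1, x2, x3 = [0.0, 1.0], [1.0, 1.0], [2.0, 0.0]
w = fit_ranked([(x1, x2), (x2, x3)], dim=2)
assert dot(w, x1) < dot(w, x2) < dot(w, x3)  # both relations preserved on the line
```

When no single w can satisfy all relations, the residual penalty stays positive, which is the situation the paper addresses by decomposing the global model into local ranked models, each fitted to a consistent subset of relations.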
Abstract:
Objective: The aim was to compare three ulcer classification systems as predictors of the outcome of diabetic foot ulcers: the Wagner, the University of Texas (UT) and the size (area, depth), sepsis, arteriopathy, denervation (S(AD)SAD) systems, in a specialist clinic in Brazil. Methods: Ulcer area, depth, appearance, infection and associated ischaemia and neuropathy were recorded in a consecutive series of 94 subjects. A novel score, the S(AD)SAD score, was derived from the sum of individual items of the S(AD)SAD system, and was evaluated. Follow-up was for at least 6 months. The primary outcome measure was the incidence of healing. Results: Mean age was 57.6 years; 57 (60.6%) were male. Forty-eight ulcers (51.1%) healed without surgery; 11 (12.2%) subjects underwent minor amputation. Significant differences in terms of healing were observed for depth (P = 0.002), infection (P = 0.006) and denervation (P = 0.002) using the S(AD)SAD system, for UT grade (P = 0.002) and stage (P = 0.032), and for Wagner grades (P = 0.002). Ulcers with an S(AD)SAD score of <= 9 (total possible 15) were 7.6 times more likely to heal than those with scores >= 10 (P < 0.001). Conclusions: All three systems predicted ulcer outcome. The S(AD)SAD score of ulcer severity could represent a useful addition to routine clinical practice. The association between outcome and ulcer depth confirms earlier reports. The association with infection was stronger than that reported from centres in Europe or North America. The very strong association with neuropathy has only previously been observed in Tanzania. Studies designed to compare outcomes in different countries should adopt systems of classification which are valid for the populations studied.
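The composite score described above is simple arithmetic, and a hedged sketch may help: assuming, as the "total possible 15" suggests, that the five S(AD)SAD items (size/area, depth, sepsis, arteriopathy, denervation) are each graded 0-3 and summed, the score and its <= 9 healing threshold can be computed as follows. The item names and example grades are illustrative, not taken from the study's grading tables.

```python
# Assumed item set and 0-3 grading, consistent with a 15-point maximum.
ITEMS = ("area", "depth", "sepsis", "arteriopathy", "denervation")

def sadsad_score(grades):
    """Sum the five item grades (each assumed to range 0-3) into a 0-15 score."""
    assert set(grades) == set(ITEMS), "all five items must be graded"
    assert all(0 <= grades[i] <= 3 for i in ITEMS), "grades assumed 0-3"
    return sum(grades[i] for i in ITEMS)

# Illustrative ulcer: moderate area and sepsis, shallow, mild arteriopathy,
# severe denervation.
ulcer = {"area": 2, "depth": 1, "sepsis": 2, "arteriopathy": 1, "denervation": 3}
score = sadsad_score(ulcer)  # 9
print("score <= 9: more likely to heal" if score <= 9 else "score >= 10: poorer prognosis")
```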
Abstract:
This book provides the latest in a series of books growing out of the International Joint Conferences on Computer, Information and Systems Sciences and Engineering. It includes chapters in the most advanced areas of Computing, Informatics, Systems Sciences and Engineering. It is accessible to a wide readership, including professors, researchers, practitioners and students. This book includes a set of rigorously reviewed world-class manuscripts addressing and detailing state-of-the-art research projects in the areas of Computer Science, Informatics, and Systems Sciences, and Engineering. It includes selected papers from the conference proceedings of the Ninth International Joint Conferences on Computer, Information, and Systems Sciences, and Engineering (CISSE 2013). Coverage includes topics in: Industrial Electronics, Technology & Automation, Telecommunications and Networking, Systems, Computing Sciences and Software Engineering, Engineering Education, Instructional Technology, Assessment, and E-learning.