857 results for Computing Classification Systems
Abstract:
The aims of the project were twofold: 1) To investigate classification procedures for remotely sensed digital data, in order to develop modifications to existing algorithms and propose novel classification procedures; and 2) To investigate and develop algorithms for contextual enhancement of classified imagery in order to increase classification accuracy. The following classifiers were examined: box, decision tree, minimum distance, maximum likelihood. In addition to these, the following algorithms were developed during the course of the research: deviant distance, look-up table and an automated decision tree classifier using expert systems technology. Clustering techniques for unsupervised classification were also investigated. Contextual enhancements investigated were: mode filters, small area replacement and Wharton's CONAN algorithm. Additionally, methods for noise- and edge-based declassification and contextual reclassification, non-probabilistic relaxation and relaxation based on Markov chain theory were developed. The advantages of per-field classifiers and Geographical Information Systems were investigated. The conclusions presented suggest suitable combinations of classifier and contextual enhancement, given user accuracy requirements and time constraints. These were then tested for validity using a different data set. A brief examination of the utility of the recommended contextual algorithms for reducing the effects of data noise was also carried out.
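The simplest of the contextual enhancements listed above, the mode filter, can be sketched as follows. This is a generic illustration under the assumption of a NumPy raster of integer class labels, not the project's own implementation:

```python
import numpy as np
from collections import Counter

def mode_filter(classified, size=3):
    """Replace each pixel's class label with the modal (most frequent)
    label in its size x size neighbourhood; border pixels are left
    unchanged for simplicity."""
    out = classified.copy()
    r = size // 2
    rows, cols = classified.shape
    for i in range(r, rows - r):
        for j in range(r, cols - r):
            window = classified[i - r:i + r + 1, j - r:j + r + 1]
            out[i, j] = Counter(window.ravel()).most_common(1)[0][0]
    return out

# A lone misclassified pixel inside a homogeneous field is smoothed away.
img = np.zeros((5, 5), dtype=int)
img[2, 2] = 1          # isolated "noise" pixel of class 1
smoothed = mode_filter(img)
```

Because each output label is taken from the neighbourhood's majority, isolated misclassifications are removed while large homogeneous regions are preserved, which is why such filters can raise per-pixel classification accuracy.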
Abstract:
This thesis presents a thorough and principled investigation into the application of artificial neural networks to the biological monitoring of freshwater. It contains original ideas on the classification and interpretation of benthic macroinvertebrates, and aims to demonstrate their superiority over the biotic systems currently used in the UK to report river water quality. The conceptual basis of a new biological classification system is described, and a full review and analysis of a number of river data sets is presented. The biological classification is compared to the common biotic systems using data from the Upper Trent catchment. This data set contained 292 expertly classified invertebrate samples identified to mixed taxonomic levels. The neural network experimental work concentrates on the classification of the invertebrate samples into biological class, where only a subset of the sample is used to form the classification. Other experimentation is conducted into the identification of novel input samples, the classification of samples from different biotopes and the use of prior information in the neural network models. The biological classification is shown to provide an intuitive interpretation of a graphical representation, generated without reference to the class labels, of the Upper Trent data. The selection of key indicator taxa is considered using three different approaches: one novel, one from information theory and one from classical statistics. Good indicators of quality class based on these analyses are found to be in good agreement with those chosen by a domain expert. The change in information associated with different levels of identification and enumeration of taxa is quantified. The feasibility of using neural network classifiers and predictors to develop numeric criteria for the biological assessment of sediment contamination in the Great Lakes is also investigated.
Abstract:
Text classification is essential for narrowing down the number of documents relevant to a particular topic for further perusal, especially when searching through large biomedical databases. Protein-protein interactions are an example of such a topic, with databases being devoted specifically to them. This paper proposes a semi-supervised learning algorithm via local learning with class priors (LL-CP) for biomedical text classification, where unlabeled data points are classified in a vector space based on their proximity to labeled nodes. The algorithm has been evaluated on a corpus of biomedical documents to identify abstracts containing information about protein-protein interactions, with promising results. Experimental results show that LL-CP outperforms traditional semi-supervised learning algorithms such as SVM, and it also performs better than local learning without incorporating class priors.
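The core idea described above, labeling unlabeled points by their proximity to labeled ones, with class priors weighting the decision, can be sketched as a toy example. This is not the authors' LL-CP algorithm; the data, names, and prior-weighting scheme here are invented for illustration:

```python
import numpy as np

def classify_by_proximity(X_lab, y_lab, X_unl, priors=None):
    """Assign each unlabeled vector the label of its nearest labeled
    neighbour, with optional class priors discounting distances
    (a higher prior makes a class's points effectively 'closer').
    A toy stand-in for proximity-based semi-supervised labelling."""
    priors = priors if priors is not None else {c: 1.0 for c in set(y_lab)}
    labels = []
    for x in X_unl:
        d = [np.linalg.norm(x - xl) / priors[yl]
             for xl, yl in zip(X_lab, y_lab)]
        labels.append(y_lab[int(np.argmin(d))])
    return labels

# Two labeled abstracts in a 2-D feature space ("ppi" = describes an
# interaction); unlabeled points take the nearest label.
X_lab = np.array([[0.0, 0.0], [10.0, 10.0]])
y_lab = ["negative", "ppi"]
X_unl = np.array([[1.0, 1.0], [9.0, 9.0]])
```

The prior term is what distinguishes this from plain nearest-neighbour labelling: when class frequencies are very unbalanced, as with interaction-bearing abstracts in a large database, the priors bias borderline points toward the more plausible class.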
Abstract:
With careful calculation of signal forwarding weights, relay nodes can work collaboratively to enhance downlink transmission performance by forming a virtual multiple-input multiple-output beamforming system. Although collaborative relay beamforming schemes for a single user have been widely investigated for cellular systems in the previous literature, there are few studies on relay beamforming for multiple users. In this paper, we study collaborative downlink signal transmission with multiple amplify-and-forward relay nodes for multiple users in cellular systems. We propose two new algorithms to determine the beamforming weights with the same objective of minimizing the power consumption of the relay nodes. In the first algorithm, we aim to guarantee the received signal-to-noise ratio at multiple users for relay beamforming with orthogonal channels. We prove that the solution obtained by a semidefinite relaxation technique is optimal. In the second algorithm, we propose an iterative algorithm that jointly selects the base station antennas and optimizes the relay beamforming weights to reach the target signal-to-interference-and-noise ratio at multiple users with nonorthogonal channels. Numerical results validate our theoretical analysis and demonstrate that the proposed optimal schemes can effectively reduce the relay power consumption compared with several other beamforming approaches. © 2012 John Wiley & Sons, Ltd.
Abstract:
MOTIVATION: G protein-coupled receptors (GPCRs) play an important role in many physiological systems by transducing an extracellular signal into an intracellular response. Over 50% of all marketed drugs are targeted towards a GPCR. There is considerable interest in developing an algorithm that could effectively predict the function of a GPCR from its primary sequence. Such an algorithm is useful not only in identifying novel GPCR sequences but also in characterizing the interrelationships between known GPCRs. RESULTS: An alignment-free approach to GPCR classification has been developed using techniques drawn from data mining and proteochemometrics. A dataset of over 8000 sequences was constructed to train the algorithm. This represents one of the largest GPCR datasets currently available. A predictive algorithm was developed based upon the simplest reasonable numerical representation of the protein's physicochemical properties. A selective top-down approach was developed, which used a hierarchical classifier to assign sequences to subdivisions within the GPCR hierarchy. The predictive performance of the algorithm was assessed against several standard data mining classifiers and further validated against Support Vector Machine-based GPCR prediction servers. The selective top-down approach achieves significantly higher accuracy than standard data mining methods in almost all cases.
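The "selective top-down" routing described above can be sketched generically: a local classifier at each node of the hierarchy selects one child, and only that subtree is descended. The hierarchy, thresholds, and rule-based stand-in classifiers below are invented for illustration and are not the paper's trained models:

```python
def top_down_classify(x, hierarchy, classifiers, node="GPCR"):
    """Selective top-down assignment: at each internal node the local
    classifier picks one child, and descent continues only into that
    child's subtree until a leaf (a final subdivision) is reached."""
    while node in hierarchy:           # internal node -> keep descending
        node = classifiers[node](x)
    return node

# Toy two-level hierarchy; lambdas stand in for trained per-node models
# operating on (hypothetical) physicochemical feature dicts.
hierarchy = {"GPCR": ["ClassA", "ClassB"], "ClassA": ["Amine", "Peptide"]}
classifiers = {
    "GPCR":   lambda x: "ClassA" if x["hydrophobicity"] > 0.5 else "ClassB",
    "ClassA": lambda x: "Amine" if x["length"] < 400 else "Peptide",
}
```

The appeal of this structure is that each node's classifier solves a small, focused discrimination problem rather than a flat 1-of-N problem over all subdivisions at once.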
Abstract:
Organisations have been approaching servitisation in an unstructured fashion. This is partially because there is insufficient understanding of the different types of Product-Service offerings. Therefore, a more detailed understanding of Product-Service types might advance the collective knowledge and assist organisations that are considering a servitisation strategy. Current models discuss specific aspects on the basis of only a few (sometimes a single) dimensions. In this paper, we develop a comprehensive model for classifying traditional and green Product-Service offerings, thus combining business and green offerings in a single model. We describe the model building process and its practical application in a case study. The model reveals the various traditional and green options available to companies and identifies how they can compete through services; it allows servitisation positions to be identified so that a company may track its journey over time. Finally, it fosters the introduction of innovative Product-Service Systems as promising business models to address environmental and social challenges. © 2013 Elsevier Ltd. All rights reserved.
Abstract:
The G-protein coupled receptor (GPCR) superfamily fulfils various metabolic functions and interacts with a diverse range of ligands. There is a lack of sequence similarity between the six classes that comprise the GPCR superfamily. Moreover, most novel GPCRs found have low sequence similarity to other family members, which makes it difficult to infer properties from related receptors. Many different approaches have been taken towards developing efficient and accurate methods for GPCR classification, ranging from motif-based systems to machine learning as well as a variety of alignment-free techniques based on the physicochemical properties of their amino acid sequences. This review describes the inherent difficulties in developing a GPCR classification algorithm and includes techniques previously employed in this area.
Abstract:
Purpose ‐ This study provides empirical evidence for the contextuality of marketing performance assessment (MPA) systems. It aims to introduce a taxonomical classification of MPA profiles based on the relative emphasis placed on different dimensions of marketing performance in different companies and business contexts. Design/methodology/approach ‐ The data used in this study (n=1,157) were collected using a web-based questionnaire, targeted to top managers in Finnish companies. Two multivariate data analysis techniques were used to address the research questions. First, dimensions of marketing performance underlying the current MPA systems were identified through factor analysis. Second, a taxonomy of different profiles of marketing performance measurement was created by clustering respondents based on the relative emphasis placed on the dimensions and characterizing them vis-à-vis contextual factors. Findings ‐ The study identifies nine broad dimensions of marketing performance that underlie the MPA systems in use and five MPA profiles typical of companies of varying sizes in varying industries, market life cycle stages, and competitive positions associated with varying levels of market orientation and business performance. The findings support the previously conceptual notion of contextuality in MPA and provide empirical evidence for the factors that affect MPA systems in practice. Originality/value ‐ The paper presents the first field study of current MPA systems focusing on combinations of metrics in use. The findings of the study provide empirical support for the contextuality of MPA and form a classification of existing contextual systems suitable for benchmarking purposes. Limited evidence for performance differences between MPA profiles is also provided.
Abstract:
Magnification can be provided to assist those with visual impairment to make the best use of remaining vision. Electronic transverse magnification of an object was first conceived for use in low vision in the late 1950s, but has developed slowly and is not extensively prescribed because of its relatively high cost and lack of portability. Electronic devices providing transverse magnification have been termed closed-circuit televisions (CCTVs) because of the direct cable link between the camera imaging system and monitor viewing system, but this description generally refers to surveillance devices and does not indicate the provision of features such as magnification and contrast enhancement. Therefore, the term Electronic Vision Enhancement Systems (EVES) is proposed to better distinguish and describe such devices. This paper reviews current knowledge on EVES for the visually impaired in terms of: classification; hardware and software (development of technology, magnification and field-of-view, contrast and image enhancement); user aspects (users and usage, reading speed and duration, and training); and potential future development of EVES. © 2003 The College of Optometrists.
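Two of the EVES features named above, transverse magnification and contrast enhancement, reduce to very simple image operations. The sketch below is a generic illustration of those operations under the assumption of a NumPy intensity image, not a description of any particular device's processing:

```python
import numpy as np

def contrast_stretch(image, low=0.0, high=255.0):
    """Linearly rescale pixel intensities to span [low, high] -- a
    simple form of the contrast enhancement an EVES might apply."""
    img = image.astype(float)
    lo, hi = img.min(), img.max()
    if hi == lo:                      # flat image: nothing to stretch
        return np.full_like(img, low)
    return (img - lo) / (hi - lo) * (high - low) + low

def magnify(image, factor=2):
    """Nearest-neighbour transverse magnification: every pixel becomes
    a factor x factor block on the display."""
    return np.repeat(np.repeat(image, factor, axis=0), factor, axis=1)

# A low-contrast camera frame (intensities 100-160) stretched to the
# full 0-255 display range, then magnified 2x.
frame = np.array([[100, 120], [140, 160]])
enhanced = magnify(contrast_stretch(frame), factor=2)
```

In a real device these steps run on live video, and the trade-off the review discusses between magnification and field of view is visible even here: doubling the magnification quarters the portion of the scene that fits on the monitor.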
Abstract:
When faced with the task of designing and implementing a new self-aware and self-expressive computing system, researchers and practitioners need a set of guidelines on how to use the concepts and foundations developed in the Engineering Proprioception in Computing Systems (EPiCS) project. This report provides such guidelines on how to design self-aware and self-expressive computing systems in a principled way. We have documented different categories of self-awareness and self-expression levels using architectural patterns. We have also documented common architectural primitives, their possible candidate techniques and attributes for architecting self-aware and self-expressive systems. Drawing on the knowledge obtained from the previous investigations, we proposed a pattern-driven methodology for engineering self-aware and self-expressive systems to assist in utilising the patterns and primitives during design. The methodology contains detailed guidance for making decisions with respect to the possible design alternatives, providing a systematic way to build self-aware and self-expressive systems. We then qualitatively and quantitatively evaluated the methodology using two case studies. The results reveal that our pattern-driven methodology covers the main aspects of engineering self-aware and self-expressive systems, and that the resulting systems perform significantly better than the non-self-aware systems.
Abstract:
MOTIVATION: There is much interest in reducing the complexity inherent in the representation of the 20 standard amino acids within bioinformatics algorithms by developing a so-called reduced alphabet. Although there is no universally applicable residue grouping, there are numerous physicochemical criteria upon which one can base groupings. Local descriptors are a form of alignment-free analysis, the efficiency of which is dependent upon the correct selection of amino acid groupings. RESULTS: Within the context of G-protein coupled receptor (GPCR) classification, an optimization algorithm was developed, which was able to identify the most efficient grouping when used to generate local descriptors. The algorithm was inspired by the relatively new computational intelligence paradigm of artificial immune systems. A number of amino acid groupings produced by this algorithm were evaluated with respect to their ability to generate local descriptors capable of providing an accurate classification algorithm for GPCRs.
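The reduced-alphabet idea itself is easy to make concrete: map each of the 20 residues to a group letter, shrinking the alphabet before descriptors are computed. The grouping below is one hypothetical physicochemical partition chosen for illustration; the paper's immune-system-optimised groupings would differ:

```python
# A hypothetical physicochemical grouping of the 20 standard residues
# (one of many possible reduced alphabets).
GROUPS = {
    "hydrophobic": "AVLIMFWC",
    "polar":       "STNQYG",
    "charged":     "DEKRH",
    "special":     "P",
}

def reduce_sequence(seq):
    """Translate an amino-acid sequence into the reduced alphabet,
    replacing each residue with the initial of its group name;
    unknown characters become '?'."""
    table = {aa: name[0].upper()
             for name, members in GROUPS.items()
             for aa in members}
    return "".join(table.get(aa, "?") for aa in seq)
```

Descriptors computed over the 4-letter reduced sequence instead of the 20-letter original are far lower-dimensional, which is why the choice of grouping, the thing the paper optimises, matters so much for classification accuracy.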
Abstract:
The paper suggests a classification of dynamic rule-based systems. For each class of systems, limit behavior is studied. Systems with stabilizing limit states or stabilizing limit trajectories are identified, and such states and trajectories are found. The structure of the set of limit states and trajectories is investigated.
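For a deterministic system with finitely many states, the limit behaviour described above, stabilising limit states versus stabilising limit trajectories, can be found by iterating the rule until a state recurs. This is a generic sketch of that idea, not the paper's formalism; the example rules are invented:

```python
def limit_behaviour(rules, state, max_steps=1000):
    """Iterate a deterministic rule `state -> rules(state)` and report
    the limit behaviour: ('fixed', s) for a stabilising limit state, or
    ('cycle', [s1, ..., sk]) for a stabilising limit trajectory."""
    seen = {}          # state -> step at which it was first visited
    trajectory = []
    for step in range(max_steps):
        if state in seen:              # first repeat closes the limit set
            cycle = trajectory[seen[state]:]
            return ("fixed", cycle[0]) if len(cycle) == 1 else ("cycle", cycle)
        seen[state] = step
        trajectory.append(state)
        state = rules(state)
    return ("unknown", state)          # no repeat within the step budget

# Doubling mod 6 settles into a limit trajectory; integer halving
# stabilises at the limit state 0.
cycle_result = limit_behaviour(lambda s: (2 * s) % 6, 1)
fixed_result = limit_behaviour(lambda s: s // 2, 5)
```

Since a deterministic map over a finite state set must eventually revisit a state, every trajectory ends in either a fixed point or a cycle, which is exactly the dichotomy the classification studies.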