868 results for Rule-based techniques
Abstract:
Feature selection and feature weighting are useful techniques for improving the classification accuracy of the K-nearest-neighbor (K-NN) rule. The term feature selection refers to algorithms that select the best subset of the input feature set. In feature weighting, each feature is multiplied by a weight value proportional to the ability of the feature to distinguish pattern classes. In this paper, a novel hybrid approach based on the Tabu Search (TS) heuristic is proposed for simultaneous feature selection and feature weighting of the K-NN rule. The proposed TS heuristic in combination with the K-NN classifier is compared with several classifiers on various available data sets. The results indicate a significant improvement in classification accuracy. The proposed TS heuristic is also compared with various feature selection algorithms. The experiments revealed that the proposed hybrid TS heuristic is superior to both the simple TS and sequential search algorithms. We also present results for the classification of prostate cancer using multispectral images, an important problem in biomedicine.
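To make the feature-weighting idea concrete, here is a minimal sketch of a weighted K-NN classifier in Python; the weight vector is a hypothetical stand-in for the output of the paper's Tabu Search heuristic.

```python
import numpy as np

def weighted_knn_predict(X_train, y_train, x, weights, k=3):
    """Classify x with a K-NN rule whose features are scaled by `weights`.

    A weight of 0 effectively deselects a feature, so feature selection
    is the special case of feature weighting with binary weights.
    """
    diffs = (X_train - x) * weights             # apply per-feature weights
    dists = np.sqrt((diffs ** 2).sum(axis=1))   # weighted Euclidean distance
    nearest = np.argsort(dists)[:k]             # indices of the k nearest neighbors
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]            # majority vote

# Toy example with a hypothetical weight vector (in the paper it would
# be produced by the Tabu Search heuristic).
X = np.array([[1.0, 50.0], [1.2, 48.0], [3.0, 5.0], [3.1, 4.0]])
y = np.array([0, 0, 1, 1])
w = np.array([1.0, 0.1])   # down-weight the noisy second feature
print(weighted_knn_predict(X, y, np.array([1.1, 20.0]), w, k=3))
```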
Abstract:
Strasheela provides a means for the composer to create a symbolic score by formally describing it in a rule-based way. The environment defines a rich music representation for complex polyphonic scores. Strasheela enables the user to define expressive compositional rules and then to apply them to the score. Compositional rules can restrict many aspects of the music - including the rhythmic structure, the melodic structure and the harmonic structure - by constraining the parameters (e.g. duration or pitch) of musical events according to some numerical or logical relation. Strasheela combines this expressivity with efficient search strategies.
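Strasheela itself is implemented in Oz with true constraint propagation; purely as an illustration of what a compositional rule constrains, the following Python sketch filters candidate pitch sequences with two invented rules (a maximum melodic interval and a cadence on the tonic), by brute-force enumeration rather than real constraint solving.

```python
from itertools import product

# Hypothetical melodic rule: consecutive pitches differ by at most 4 semitones.
def max_interval_rule(pitches, limit=4):
    return all(abs(a - b) <= limit for a, b in zip(pitches, pitches[1:]))

# Hypothetical harmonic rule: the line must end on the tonic (pitch class 0).
def cadence_rule(pitches):
    return pitches[-1] % 12 == 0

domain = range(60, 73)  # MIDI pitches C4..C5 for each of 4 musical events
solutions = [p for p in product(domain, repeat=4)
             if max_interval_rule(p) and cadence_rule(p)]
print(len(solutions), solutions[0])
```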
Abstract:
Accurate in silico models for the quantitative prediction of the activity of G protein-coupled receptor (GPCR) ligands would greatly facilitate the process of drug discovery and development. Several methodologies have been developed based on the properties of the ligands, the direct study of the receptor-ligand interactions, or a combination of both approaches. Ligand-based three-dimensional quantitative structure-activity relationships (3D-QSAR) techniques, not requiring knowledge of the receptor structure, were historically the first to be applied to the prediction of the activity of GPCR ligands. They are generally endowed with robustness and good ranking ability; however, they are highly dependent on training sets. Structure-based techniques generally do not provide the level of accuracy necessary to yield meaningful rankings when applied to GPCR homology models. However, they are essentially independent of training sets and have a sufficient level of accuracy to allow an effective discrimination between binders and nonbinders, thus qualifying as viable lead discovery tools. The combination of ligand and structure-based methodologies in the form of receptor-based 3D-QSAR and ligand and structure-based consensus models results in robust and accurate quantitative predictions. The contribution of the structure-based component to these combined approaches is expected to become more substantial and effective in the future, as more sophisticated scoring functions are developed and more detailed structural information on GPCRs is gathered.
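As a rough sketch of the consensus idea, not the authors' actual protocol, the snippet below normalizes the scores of a hypothetical ligand-based and a hypothetical structure-based method and averages them into a single ranking.

```python
import numpy as np

def zscore(x):
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

# Hypothetical predicted activities for five ligands from two methods.
ligand_based = [7.2, 6.8, 5.1, 8.0, 6.0]          # e.g. 3D-QSAR predicted pKi
structure_based = [-9.1, -8.0, -6.5, -9.8, -7.7]  # e.g. docking scores (lower = better)

# Align directions (negate docking scores so higher = more active), then average.
consensus = (zscore(ligand_based) + zscore([-s for s in structure_based])) / 2
ranking = np.argsort(-consensus)  # best-ranked ligand first
print(ranking)
```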
Abstract:
Developing a desirable framework for handling inconsistencies in software requirements specifications is a challenging problem. It has been widely recognized that the relative priority of requirements can help developers to make necessary trade-off decisions for resolving conflicts. However, in most distributed development, such as viewpoints-based approaches, different stakeholders may assign different levels of priority to the same shared requirements statement from their own perspectives. The disagreement in the local levels of priority assigned to the same shared requirements statement often puts developers into a dilemma during the inconsistency handling process. The main contribution of this paper is a prioritized merging-based framework for handling inconsistency in distributed software requirements specifications. Given a set of distributed inconsistent requirements collections with local prioritizations, we first construct a requirements specification with a prioritization from an overall perspective. We provide two approaches to constructing a requirements specification with a global prioritization: a merging-based construction and a priority vector-based construction. Following this, we derive proposals for handling inconsistencies from the globally prioritized requirements specification in terms of prioritized merging. Moreover, from the overall perspective, these proposals may be viewed as the most appropriate modifications of the given inconsistent requirements specification with respect to the ordering relation over all consistent subsets of the specification. Finally, we consider applying negotiation-based techniques to viewpoints so as to identify an acceptable common proposal from among these proposals.
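A minimal sketch of prioritized merging, under assumptions not in the abstract (a greedy strategy and a hypothetical `is_consistent` callback): walk through the globally prioritized requirements from highest to lowest priority and keep each statement that remains consistent with those already accepted.

```python
def prioritized_merge(requirements, is_consistent):
    """Greedy prioritized-merging sketch.

    `requirements` is a list of (priority, statement) pairs, where a
    smaller number means a higher priority; `is_consistent` is a
    hypothetical callback deciding whether a set of statements is
    jointly consistent.
    """
    accepted = []
    for _, stmt in sorted(requirements, key=lambda r: r[0]):
        if is_consistent(accepted + [stmt]):   # keep only if still consistent
            accepted.append(stmt)
    return accepted

# Toy example: two statements conflict when one negates the other.
reqs = [(1, "fast_login"), (2, "no_fast_login"), (3, "audit_log")]
consistent = lambda s: not ("fast_login" in s and "no_fast_login" in s)
print(prioritized_merge(reqs, consistent))   # ['fast_login', 'audit_log']
```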
Abstract:
This research aims to use the multivariate geochemical dataset generated by the Tellus project to investigate the appropriate use of transformation methods, as required for Gaussian-based geostatistical analysis, while maintaining the integrity of the geochemical data and the inherently constrained behaviour of its multivariate relationships. The widely used normal score transform is compared with a stepwise conditional transform technique. The Tellus Project, managed by GSNI and funded by the Department of Enterprise Trade and Development and the EU's Building Sustainable Prosperity Fund, is the most comprehensive geological mapping project ever undertaken in Northern Ireland. Previous studies have demonstrated spatial variability in the Tellus data, but geostatistical analysis and interpretation of the datasets require a methodology that reproduces the inherently complex multivariate relations. Previous investigation of the Tellus geochemical data has included Gaussian-based techniques; however, earth science variables are rarely Gaussian, so transformation of the data is integral to the approach. In particular, the stepwise conditional transform is investigated and developed for the geochemical datasets obtained as part of the Tellus project. The transform is applied to four variables in a bivariate nested fashion owing to the limited availability of data. Simulation of the transformed variables is then carried out, along with a corresponding back transformation to original units. Results show that the stepwise transform succeeds in reproducing both the univariate statistics and the complex bivariate relations exhibited by the data. Greater fidelity to multivariate relationships will improve uncertainty models, which are required for consequent geological, environmental and economic inferences.
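For reference, the simpler of the two transforms compared here can be sketched in a few lines: the normal score transform maps data ranks to standard normal quantiles. Because it is applied to each variable independently, it can distort the multivariate relationships that the stepwise conditional transform is designed to preserve; the data below are synthetic.

```python
import numpy as np
from scipy.stats import norm, rankdata

def normal_score_transform(x):
    """Map values of x, via their ranks, to standard normal quantiles."""
    n = len(x)
    ranks = rankdata(x)            # ranks 1..n, ties averaged
    p = (ranks - 0.5) / n          # plotting positions in (0, 1)
    return norm.ppf(p)             # Gaussian quantiles

# Hypothetical skewed geochemical concentrations (ppm).
x = np.random.lognormal(mean=1.0, sigma=0.8, size=1000)
y = normal_score_transform(x)
print(round(y.mean(), 3), round(y.std(), 3))   # approximately 0 and 1
```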
Abstract:
Increased complexity and interconnectivity of Supervisory Control and Data Acquisition (SCADA) systems in Smart Grids potentially mean greater susceptibility to malicious attackers. SCADA systems with legacy communication infrastructure have inherent cyber-security vulnerabilities, as these systems were originally designed with little consideration of cyber threats. In order to improve the cyber-security of SCADA networks, this paper presents a rule-based Intrusion Detection System (IDS) using a Deep Packet Inspection (DPI) method, which includes signature-based and model-based approaches tailored for SCADA systems. The proposed signature-based rules can accurately detect several known suspicious or malicious attacks. In addition, model-based detection is proposed as a complementary method for detecting unknown attacks. Finally, the proposed intrusion detection approaches for SCADA networks are implemented and verified via Snort rules.
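The paper's rules are expressed in Snort; as a language-neutral sketch only, signature-based DPI boils down to matching byte patterns in the application-layer payload. The Modbus function code below is real (0x05, Write Single Coil), but the policy of flagging it is an invented example, not one of the paper's rules.

```python
# Sketch of a signature-based DPI check for Modbus/TCP payloads.
# Hypothetical policy: flag any Write Single Coil (function code 0x05)
# request, e.g. because the monitored outstation should be read-only.

MBAP_HEADER_LEN = 7  # transaction id (2) + protocol id (2) + length (2) + unit id (1)

def signature_alert(payload: bytes) -> bool:
    if len(payload) <= MBAP_HEADER_LEN:
        return False
    protocol_id = int.from_bytes(payload[2:4], "big")
    function_code = payload[MBAP_HEADER_LEN]
    # Rule fires on: Modbus protocol (id 0) + write-single-coil function code.
    return protocol_id == 0 and function_code == 0x05

# Toy Modbus/TCP request: write coil at address 0x0001 to ON.
pkt = bytes.fromhex("000100000006" + "11" + "05" + "0001" + "ff00")
print(signature_alert(pkt))   # True -> raise an IDS alert
```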
Abstract:
Adolescent idiopathic scoliosis (AIS) is a three-dimensional deformity of the spine. Its treatment includes observation, bracing to limit progression, or surgery to correct the skeletal deformity and halt its progression. Surgical treatment remains controversial, both in its indications and in the choice of procedure. Despite the existence of classifications to guide the treatment of AIS, intra- and inter-observer variability in operative strategy has been reported in the literature. This variability is further accentuated by the evolution of surgical techniques and of the available instrumentation. Advances in technology and its integration into the medical field have led to the use of artificial intelligence algorithms to assist the classification and three-dimensional assessment of scoliosis. Some algorithms have been shown to be effective in reducing variability in scoliosis classification and in guiding treatment. The general objective of this thesis is to develop an application that uses artificial intelligence tools to integrate the data of a new patient with the evidence available in the literature in order to guide the surgical treatment of AIS. To this end, a literature review of existing applications for AIS assessment was undertaken to gather the elements needed for an application that would be effective and accepted in the clinical setting. This review showed that the presence of a "black box" in the applications developed so far limits clinical integration, where evidence-based justification is essential. In a first study, we developed a decision tree for the classification of idiopathic scoliosis based on the Lenke classification, which is the most commonly used today but has been criticized for its complexity and its inter- and intra-observer variability. This decision tree was shown to increase classification accuracy in proportion to the time spent classifying, independently of the level of knowledge of AIS. In a second study, a surgical strategy algorithm based on rules extracted from the literature was developed to guide surgeons in selecting the approach and fusion levels for AIS. When applied to a large database of 1556 AIS cases, the algorithm proposed an operative strategy similar to that of an expert surgeon in nearly 70% of cases. This study confirmed that valid operative strategies can be extracted with a decision tree using rules drawn from the literature. In a third study, the classification of 1776 AIS patients with a Kohonen map, a type of neural network, showed that there are typical scolioses (single-curve or double thoracic curves) for which the variability in surgical treatment deviates little from the recommendations of the Lenke classification, whereas scolioses with multiple curves, or tangential to two typical curve groups, showed the greatest variation in operative strategy. Finally, a software platform integrating each of the above studies was developed.
This software interface allows the entry of radiological data for a scoliotic patient, classifies the AIS with the classification decision tree, and suggests a surgical approach based on the operative strategy decision tree. An analysis of the postoperative correction obtained shows a trend, although not statistically significant, towards better balance in patients operated on following the strategy recommended by the software platform than in those receiving a different treatment. The studies presented in this thesis highlight that artificial intelligence algorithms for the classification of AIS and the elaboration of operative strategies can be integrated into a software platform and could assist surgeons in their preoperative planning.
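The Kohonen map used in the third study is a generic neural technique; the sketch below shows its core competitive update on synthetic data (no clinical content), pulling the best-matching unit and its grid neighbours towards each input vector.

```python
import numpy as np

def train_som(data, grid=(5, 5), epochs=20, lr=0.5, sigma=1.5, seed=0):
    """Minimal Kohonen self-organizing map over the row vectors in `data`."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for epoch in range(epochs):
        decay = np.exp(-epoch / epochs)     # shrink learning rate and neighbourhood
        for x in rng.permutation(data):
            # Best-matching unit: the node whose weight vector is closest to x.
            bmu = np.unravel_index(
                np.argmin(((weights - x) ** 2).sum(axis=2)), (h, w))
            # Gaussian neighbourhood around the BMU on the grid.
            d2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
            nbh = np.exp(-d2 / (2 * (sigma * decay) ** 2))
            weights += (lr * decay) * nbh[..., None] * (x - weights)
    return weights

# Hypothetical 4-feature curve descriptors for 100 patients (synthetic).
data = np.random.default_rng(1).normal(size=(100, 4))
print(train_som(data).shape)   # (5, 5, 4)
```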
Abstract:
The theme of the thesis centres on one important aspect of wireless sensor networks: energy efficiency. The limited energy source of the sensor nodes calls for the design of energy-efficient routing protocols, and such protocols should minimize the number of communications among the nodes to save energy. Cluster-based techniques have been found to be energy-efficient: clusters are formed, the data from the nodes in each cluster are collected by a cluster head, and the aggregated data are then forwarded to the base station. An appropriate cluster head selection process and a desirable distribution of the clusters can reduce the energy consumption of the network and prolong its lifetime. In this work, two such schemes were developed for static wireless sensor networks. The first scheme addresses the energy wasted when clusters are rebuilt across all the nodes; a tree-based scheme is presented that alleviates this problem by rebuilding only sub-clusters of the network. An analytical model of the energy consumption of the proposed scheme is developed, the scheme is compared with an existing cluster-based scheme, and the simulation study confirmed the energy savings. The second scheme builds load-balanced, energy-efficient clusters to prolong the lifetime of the network. A voting-based approach is proposed that uses neighbour-node information in the cluster head selection process, and the number of nodes joining a cluster is restricted so as to obtain equal-sized, optimal clusters. Multi-hop communication among the cluster heads is also introduced to reduce energy consumption. The simulation study showed that the scheme produces balanced clusters and that the network achieves a reduction in energy consumption. The main conclusion of the study is that a routing scheme should pay attention to successful data delivery from node to base station in addition to energy efficiency. Cluster-based protocols have been extended from static to mobile scenarios by various authors, but none of the proposals addresses cluster head election appropriately in view of mobility. An elegant scheme for electing cluster heads is presented that meets the challenge of maintaining cluster durability when all the nodes in the network are moving; the scheme has been simulated and compared with a similar approach. The proliferation of sensor networks provides users with large sets of sensor information to utilise in various applications. Sensor network programming is inherently difficult for various reasons, so there must be an elegant way to collect the data gathered by sensor networks without worrying about the underlying structure of the network. The final work presented addresses a way to collect data from a sensor network and present it to users in a flexible manner. A service-oriented architecture based application is built, and the data collection task is exposed as a web service; this enables the composition of sensor data from different sensor networks to build interesting applications. The main objective of the thesis was to design energy-efficient routing schemes for both static and mobile sensor networks, and a progressive approach was followed to achieve this goal.
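A minimal sketch of the voting idea behind the second scheme, with the protocol details assumed rather than taken from the thesis: each node votes for its highest-energy neighbour, the most-voted nodes become cluster heads, and cluster membership is capped to keep clusters balanced.

```python
from collections import Counter

def elect_cluster_heads(energy, neighbors, num_heads=2, max_size=3):
    """Voting-based cluster head election sketch.

    `energy` maps node -> residual energy; `neighbors` maps node -> list
    of nodes in radio range. Each node votes for its highest-energy
    neighbour (itself included); the most-voted nodes become heads, and
    joining is capped at `max_size` members per cluster.
    """
    votes = Counter()
    for node, nbrs in neighbors.items():
        votes[max(nbrs + [node], key=lambda n: energy[n])] += 1
    heads = [n for n, _ in votes.most_common(num_heads)]
    clusters = {h: [] for h in heads}
    for node in sorted(energy, key=energy.get, reverse=True):
        if node in heads:
            continue
        # Join a reachable head that still has a free slot.
        for h in heads:
            if node in neighbors[h] and len(clusters[h]) < max_size:
                clusters[h].append(node)
                break
    return clusters

energy = {"a": 5.0, "b": 9.0, "c": 4.0, "d": 8.5, "e": 3.0, "f": 6.0}
nbrs = {"a": ["b", "c"], "b": ["a", "c", "d"], "c": ["a", "b"],
        "d": ["b", "e", "f"], "e": ["d", "f"], "f": ["d", "e"]}
print(elect_cluster_heads(energy, nbrs))   # {'b': ['a', 'c'], 'd': ['f', 'e']}
```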
Abstract:
This thesis summarizes the results of studies on a syntax-based approach to translation between Malayalam, one of the Dravidian languages, and English, and on the development of the major modules of a prototype machine translation system from Malayalam to English. The development of the system is a pioneering effort for the Malayalam language, unattempted by previous researchers, and the computational models chosen for the system are the first of their kind for Malayalam. An in-depth study has been carried out in the design of the computational models and data structures needed for the different modules required for the prototype system: a morphological analyzer, a parser, a syntactic structure transfer module and a target language sentence generator. The generation of the lists of part-of-speech tags, chunk tags and the hierarchical dependencies among the chunks required for the translation process has also been completed. In the development process, the major goals are: (a) accuracy of translation, (b) speed and (c) space. Accuracy-wise, smart tools for handling the transfer grammar and translation standards, including equivalent words, expressions, phrases and styles in the target language, are to be developed, and the grammar should be optimized with a view to obtaining a single correct parse and hence a single translated output. Speed-wise, innovative use of corpus analysis, an efficient parsing algorithm, the design of efficient data structures and a run-time frequency-based rearrangement of the grammar, which substantially reduces the parsing and generation time, are required; the space requirement also has to be minimised.
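One of the speed measures, run-time frequency-based rearrangement of the grammar, can be sketched briefly; the rule format below is an invented placeholder, not the thesis' grammar. Productions that succeed often are moved to the front of their nonterminal's alternatives so they are tried first.

```python
from collections import defaultdict

class ReorderingGrammar:
    """Sketch: try a nonterminal's productions in success-frequency order."""

    def __init__(self, rules):
        self.rules = rules                # nonterminal -> list of productions
        self.hits = defaultdict(int)      # (nonterminal, production) -> successes

    def alternatives(self, nt):
        # Most frequently successful productions come first, cutting the
        # average search time for common constructions.
        return sorted(self.rules[nt], key=lambda p: -self.hits[(nt, p)])

    def record_success(self, nt, production):
        self.hits[(nt, production)] += 1

# Hypothetical toy rules for a noun phrase.
g = ReorderingGrammar({"NP": ("Det N", "N", "Det Adj N")})
for _ in range(5):
    g.record_success("NP", "N")           # bare nouns dominate the corpus
g.record_success("NP", "Det Adj N")
print(g.alternatives("NP"))               # ['N', 'Det Adj N', 'Det N']
```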
Abstract:
Cancer treatment is most effective when the disease is detected early, and progress in treatment will be closely related to the ability to reduce the proportion of misses in the cancer detection task. The effectiveness of algorithms for detecting cancers can be greatly increased if these algorithms work synergistically with those for characterizing normal mammograms. This research work combines computerized image analysis techniques and neural networks to separate out some fraction of the normal mammograms with extremely high reliability, based on normal tissue identification and removal. The presence of clustered microcalcifications is one of the most important, and sometimes the only, sign of cancer on a mammogram: 60% to 70% of non-palpable breast carcinomas demonstrate microcalcifications on mammograms [44], [45], [46]. Wavelet transform (WT) based techniques are applied to the remaining mammograms, those that are obviously abnormal, to detect possible microcalcifications. The goal of this work is to improve the detection performance and throughput of screening mammography, thus providing a 'second opinion' to the radiologists. The state-of-the-art DWT computation algorithms are not suitable for practical applications with memory and delay constraints, as the DWT is not a block transform. Hence, this work also takes up the development of a Block DWT (BDWT) computational structure with a low processing-memory requirement.
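A hedged sketch of the wavelet step, not the thesis' BDWT structure: microcalcifications are small bright spots, so they appear as large fine-scale detail coefficients; thresholding those coefficients and reconstructing yields a crude detection map. The wavelet and threshold below are arbitrary choices, using the PyWavelets package.

```python
import numpy as np
import pywt

def microcalc_map(image, wavelet="db4", level=2, k=3.0):
    """Keep only large detail coefficients and reconstruct."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    coeffs[0] = np.zeros_like(coeffs[0])          # drop the smooth approximation
    detail = [c for band in coeffs[1:] for c in band]
    t = k * np.std(np.concatenate([c.ravel() for c in detail]))
    coeffs[1:] = [tuple(np.where(np.abs(c) > t, c, 0) for c in band)
                  for band in coeffs[1:]]
    return pywt.waverec2(coeffs, wavelet)

# Synthetic 64x64 "mammogram": smooth background plus two bright specks.
img = np.outer(np.hanning(64), np.hanning(64)) * 100
img[20, 20] += 60
img[40, 45] += 60
out = microcalc_map(img)
print(np.unravel_index(np.argmax(out), out.shape))   # should land near a speck
```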
Abstract:
During the past few years, there has been much discussion of a shift from rule-based systems to principle-based systems for natural language processing. This paper outlines the major computational advantages of principle-based parsing, its differences from the usual rule-based approach, and surveys several existing principle-based parsing systems used for handling languages as diverse as Warlpiri, English, and Spanish, as well as language translation.
Abstract:
Free-word order languages have long posed significant problems for standard parsing algorithms. This thesis presents an implemented parser, based on Government-Binding (GB) theory, for a particular free-word order language, Warlpiri, an aboriginal language of central Australia. The words in a sentence of a free-word order language may swap about relatively freely with little effect on meaning: the permutations of a sentence mean essentially the same thing. It is assumed that this similarity in meaning is directly reflected in the syntax. The parser presented here properly processes free word order because it assigns the same syntactic structure to the permutations of a single sentence. The parser also handles fixed word order, as well as other phenomena. On the view presented here, there is no such thing as a "configurational" or "non-configurational" language. Rather, there is a spectrum of languages that are more or less ordered. The operation of this parsing system is quite different in character from that of more traditional rule-based parsing systems, e.g., context-free parsers. In this system, parsing is carried out via the construction of two different structures, one encoding precedence information and one encoding hierarchical information. This bipartite representation is the key to handling both free- and fixed-order phenomena. This thesis first presents an overview of the portion of Warlpiri that can be parsed. Following this is a description of the linguistic theory on which the parser is based. The chapter after that describes the representations and algorithms of the parser. In conclusion, the parser is compared to related work. The appendix contains a substantial list of test cases, both grammatical and ungrammatical, that the parser has actually processed.
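As a rough data-structure sketch of the bipartite idea, with representation details assumed rather than taken from the thesis: precedence is just the surface order of words, kept separate from a hierarchical structure of heads and dependents, so permuting a sentence changes the first structure while leaving the second intact.

```python
from dataclasses import dataclass, field

@dataclass
class BipartiteParse:
    """Separate precedence (surface order) from hierarchy (head/dependents)."""
    precedence: list                                # words in surface order
    hierarchy: dict = field(default_factory=dict)   # head word -> set of dependents

    def attach(self, head, dependent):
        self.hierarchy.setdefault(head, set()).add(dependent)

    def same_hierarchy(self, other):
        return self.hierarchy == other.hierarchy

# Two hypothetical permutations of the same glossed 'sentence'.
p1 = BipartiteParse(["spear-PAST", "man-ERG", "kangaroo"])
p2 = BipartiteParse(["man-ERG", "kangaroo", "spear-PAST"])
for p in (p1, p2):
    p.attach("spear-PAST", "man-ERG")      # subject depends on the verb
    p.attach("spear-PAST", "kangaroo")     # object depends on the verb
print(p1.precedence == p2.precedence)      # False: surface order differs
print(p1.same_hierarchy(p2))               # True: same syntactic structure
```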
Abstract:
The performance of a model-based diagnosis system can be affected by several sources of uncertainty, such as model errors, uncertainty in measurements, and disturbances. This uncertainty can be handled by means of interval models. The aim of this thesis is to propose a methodology for fault detection, isolation and identification based on interval models. The methodology includes algorithms that automatically obtain the symbolic expressions of the residual generators, enhancing the structural isolability of the faults, in order to design the fault detection tests. These algorithms are based on the structural model of the system. The stages of fault detection, isolation and identification are stated as constraint satisfaction problems in continuous domains and solved by means of interval-based consistency techniques. Qualitative fault isolation is enhanced by reasoning in which the signs of the symptoms are derived from analytical redundancy relations or bond graph models of the system. An initial, empirical analysis of the differences between interval-based and statistics-based techniques is also presented in this thesis. The performance and efficiency of the contributions are illustrated through several application examples covering different levels of complexity.
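A minimal sketch of the interval fault-detection test on an invented first-order model: propagate the interval-valued parameters through the model and flag a fault when the measured output falls outside the predicted envelope.

```python
def interval_residual_test(y_measured, x, a=(0.9, 1.1), b=(-0.2, 0.2)):
    """Flag a fault if y_measured lies outside the envelope of a*x + b.

    `a` and `b` are interval-valued model parameters; for this monotone
    one-dimensional model the envelope is reached at the interval bounds.
    """
    candidates = [ai * x + bi for ai in a for bi in b]
    y_lo, y_hi = min(candidates), max(candidates)
    return not (y_lo <= y_measured <= y_hi), (y_lo, y_hi)

print(interval_residual_test(5.1, 5.0))   # consistent -> (False, envelope)
print(interval_residual_test(6.2, 5.0))   # outside the envelope -> fault
```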
Abstract:
This thesis gathers the experience of developing an intelligent supervisory system to improve the management of wastewater treatment plants, implementing it in a real plant (EDAR Granollers) and evaluating its day-to-day operation in typical plant situations. The supervisory system combines and integrates classical control tools for treatment plants (an automatic controller of the dissolved oxygen level in the biological reactor, descriptive process models, etc.) with tools from the field of artificial intelligence (knowledge-based systems, specifically expert systems and case-based systems, and neural networks). The document is structured in 9 chapters. A first introductory part reviews the current state of WWTP control and explains why the management of these processes is complex (chapter 1). This introductory chapter, together with chapter 2, which presents the background of this thesis, serves to establish the objectives of the work (chapter 3). Chapter 4 then describes the peculiarities and specificities of the plant chosen to implement the supervisory system. Chapters 6 and 7 present the work done to develop the rule-based or expert system (chapter 6) and the case-based system (chapter 7). Chapter 8 describes the integration of these two reasoning tools in a distributed multi-level architecture. Finally, a last chapter covers the evaluation (verification and validation), first of each tool separately and then of the overall system against real situations occurring at the treatment plant.
Abstract:
The activated sludge and anaerobic digestion processes have been described by widely accepted models. Nevertheless, these models still have limitations when describing operational problems of microbiological origin. The aim of this thesis is to develop a knowledge-based model to simulate the risk of plant-wide operational problems of microbiological origin. For the risk model, heuristic knowledge from experts and the literature was implemented in a rule-based system. Using fuzzy logic, the system can infer a risk index for the main operational problems of microbiological origin (i.e. filamentous bulking, biological foaming, rising sludge and deflocculation). To show the results of the risk model, it was implemented in the Benchmark Simulation Models, which allowed the risk model's response to be studied in different scenarios and under different control strategies. The risk model has proven very useful, providing a third criterion for evaluating control strategies apart from the economic and environmental criteria.
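A hedged sketch of the fuzzy inference step, with invented membership functions and an invented rule rather than the thesis' knowledge base: fuzzify the operating variables, fire a rule such as "IF sludge retention time is high AND dissolved oxygen is low THEN bulking risk is high", and read the rule activation as a risk index.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def bulking_risk(srt_days, do_mgl):
    """Toy fuzzy rule: IF SRT is high AND DO is low THEN bulking risk is high.

    The membership functions below are illustrative placeholders.
    """
    srt_high = tri(srt_days, 10, 20, 30)     # 'high sludge retention time'
    do_low = tri(do_mgl, -1, 0.5, 2)         # 'low dissolved oxygen'
    return min(srt_high, do_low)             # AND as minimum; risk index in [0, 1]

print(bulking_risk(18, 0.8))   # high risk index (0.8)
print(bulking_risk(8, 2.5))    # zero risk index
```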