899 results for Expert System. Rule-based System. Inference Engine. Rules. Alarm Management. Alarm filtering


Relevance:

100.00%

Publisher:

Abstract:

Knowledge creation within organizations becomes visible through the proper management of individual knowledge; however, each individual must interact in such a way as to form a network, or organizational knowledge system, that consolidates companies over the long term in the environment in which they operate. This document reviews central elements of knowledge management as seen by several authors and from several perspectives, and identifies key points for designing a knowledge management model for a Bogotá-based company in the sector of chemical inputs for the pharmaceutical, cosmetics and food industries.

Relevance:

100.00%

Publisher:

Abstract:

Following a theoretical framework built from the work of several authors on management control systems over several decades, this study examines and contrasts the relationship between the development of such systems and resources and capabilities. To this end, a case study was carried out at Teleperformance Colombia (TC), a company dedicated to providing business process outsourcing services. The study established two variables for evaluating the development of the management control system (MCS): design and use. For each of these, indicators and questions were defined to enable observation and subsequent analysis. Likewise, the resources and capabilities most important to the development of the business were selected: innovation, organizational learning and human capital. The existence of a relationship between these and the MCS implemented at TC was then examined. The information obtained was analyzed and contrasted using statistical tests widely applied to this type of study in the social sciences. Finally, six possible relationships were analyzed, of which only the positive relationship between MCS use and the human capital resource and capability was confirmed. The remaining relationships refuted the theoretical propositions that posited some influence of management control systems on the innovation and organizational learning resources and capabilities.

Relevance:

100.00%

Publisher:

Abstract:

The implementation of Decision Support Systems (DSS) in urban Wastewater Treatment Plants (WWTP) facilitates the application of more efficient, knowledge-based techniques for managing the process, ensuring effluent quality while minimizing the environmental cost of plant operation. Knowledge-based systems are characterized by their ability to work with poorly structured domains in which much of the relevant information is qualitative and/or uncertain. These are precisely the features found in biological treatment systems, and consequently in a WWTP. However, the high complexity of DSS makes their design, development and application in a full-scale plant very costly, so producing a protocol that facilitates their export to WWTPs with similar technology becomes decisive. The objective of this thesis is precisely the development of a protocol that facilitates the systematic export of DSS and the reuse of previously acquired process knowledge. The work is developed around the case study resulting from exporting the original DSS prototype implemented at the Granollers WWTP to the Montornès WWTP. This DSS integrates two types of knowledge-based systems: rule-based systems (computer programs that emulate human reasoning and problem solving using the same sources of information) and case-based reasoning systems (knowledge-based programs that resolve abnormal situations currently affecting the plant by recalling the action taken in a similar past situation). The work is structured in several chapters. The first introduces the reader to the world of decision support systems and to the wastewater treatment domain. The objectives are then set out and the materials and methods described. Next, the DSS prototype developed for the Granollers WWTP is presented. Once the prototype has been presented, the first protocol, proposed by the author in his earlier research work, is described. The results obtained from applying the protocol in practice to generate a new DSS for a different treatment plant, starting from the prototype, are then presented. This practical application allows the protocol to evolve towards a better export plan. Finally, it can be concluded that the new protocol reduces the time needed to carry out the export process, even though the number of steps required has increased, which means the new protocol is more systematic.
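
As an illustration of how the two knowledge-based components described above can be combined, the following minimal Python sketch chains a rule base with a case library. The diagnostic rules, sensor variables (e.g. sludge_volume_index) and the stored case are hypothetical placeholders, not content from the thesis.

```python
# Hypothetical sketch of a DSS combining rule-based and case-based reasoning,
# in the spirit of the architecture described above.  Variable names,
# thresholds and cases are illustrative assumptions only.

def rule_based_diagnosis(obs):
    """Apply simple IF-THEN rules to the current observations."""
    if obs["sludge_volume_index"] > 150 and obs["effluent_turbidity"] > 30:
        return "possible filamentous bulking: check microscopy, adjust aeration"
    if obs["effluent_ammonia"] > 5:
        return "nitrification problem: verify dissolved oxygen and sludge age"
    return None  # no rule fired

def case_based_diagnosis(obs, case_library):
    """Retrieve the most similar past case and reuse its recorded action."""
    def distance(case):
        return sum(abs(case["obs"][k] - obs[k]) for k in obs)
    best = min(case_library, key=distance)
    return f"similar past episode -> reuse action: {best['action']}"

def supervise(obs, case_library):
    """Try the rule base first; fall back to case-based reasoning."""
    return rule_based_diagnosis(obs) or case_based_diagnosis(obs, case_library)

if __name__ == "__main__":
    cases = [
        {"obs": {"sludge_volume_index": 90, "effluent_turbidity": 45,
                 "effluent_ammonia": 2}, "action": "increase recirculation rate"},
    ]
    today = {"sludge_volume_index": 95, "effluent_turbidity": 40, "effluent_ammonia": 1}
    print(supervise(today, cases))
```

The fall-back order (rules first, cases for situations no rule covers) is only one plausible integration strategy.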

Relevance:

100.00%

Publisher:

Abstract:

Knowledge-elicitation is a common technique used to produce rules about the operation of a plant from the knowledge that is available from human expertise. Similarly, data-mining is becoming a popular technique to extract rules from the data available from the operation of a plant. In the work reported here knowledge was required to enable the supervisory control of an aluminium hot strip mill by the determination of mill set-points. A method was developed to fuse knowledge-elicitation and data-mining to incorporate the best aspects of each technique, whilst avoiding known problems. Utilisation of the knowledge was through an expert system, which determined schedules of set-points and provided information to human operators. The results show that the method proposed in this paper was effective in producing rules for the on-line control of a complex industrial process.
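
As a rough illustration of the fusion idea, the sketch below keeps all expert-elicited rules and admits mined rules only when their support on historical data exceeds a threshold. The rule contents, attribute names (gauge_mm, temp_C, roll_force_kN) and the conflict-resolution policy are assumptions for illustration, not the method of the paper.

```python
# Illustrative sketch: fusing manually elicited rules with rules mined from
# operating data to produce a set-point schedule.  All rules are hypothetical.

elicited_rules = [  # from interviews with mill operators (hypothetical)
    {"if": lambda s: s["gauge_mm"] < 3.0, "then": {"roll_force_kN": 1800}, "source": "expert"},
]

mined_rules = [  # induced from historical coil data (hypothetical)
    {"if": lambda s: s["gauge_mm"] < 3.0 and s["temp_C"] > 950,
     "then": {"roll_force_kN": 1650}, "source": "data", "support": 0.82},
]

def fuse(expert, mined, min_support=0.7):
    """Keep all expert rules; add mined rules only when well supported."""
    return expert + [r for r in mined if r.get("support", 0) >= min_support]

def set_points(state, rules):
    """Fire matching rules in order; later (more specific) rules override."""
    schedule = {}
    for rule in rules:
        if rule["if"](state):
            schedule.update(rule["then"])
    return schedule

print(set_points({"gauge_mm": 2.5, "temp_C": 980}, fuse(elicited_rules, mined_rules)))
```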

Relevance:

100.00%

Publisher:

Abstract:

This paper introduces a new algorithm for neurofuzzy model construction and parameter estimation from observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modelling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships amongst these matrix subspaces, and the correlation between the output vector and a rule-based matrix subspace, to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix that consists of the corresponding fuzzy membership functions over the training data set. Model transparency is explored by the derivation of an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule-bases can be effectively measured by their identifiability via the A-optimality experimental design criterion. The A-optimality experimental design criterion of the weighting matrices of fuzzy rules is used to construct an initial model rule-base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule-bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level. This new approach is computationally simpler than the conventional Gram-Schmidt algorithm for resolving high-dimensional regression problems, for which it is computationally desirable to decompose complex models into a few submodels rather than a single model with a large number of input variables and the associated curse-of-dimensionality problem. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
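
A minimal numerical sketch of the central construction may help: each rule's weighting matrix is formed from its membership values over the data, the rule's subspace is spanned by the weighted regression matrix, and rules are ranked by an A-optimality-style criterion. The Gaussian membership functions, synthetic data and jitter term below are illustrative assumptions, not the paper's algorithm.

```python
# Sketch: rule j has weighting matrix W_j (its membership values over the
# data), its subspace is spanned by M_j = W_j @ X, and rules are ranked by an
# A-optimality-style score trace((M_j^T M_j)^{-1}).  All values are synthetic.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                      # input regression matrix

centres = [-1.0, 0.0, 1.0]                         # one Gaussian membership per rule
def membership(x, c, width=1.0):
    return np.exp(-((x - c) ** 2) / (2 * width ** 2))

def rule_identifiability(X, c):
    """A-optimality style score for one rule: smaller means better identified."""
    w = membership(X[:, 0], c)                     # memberships over the data
    M = X * w[:, None]                             # weighted regression matrix W_j X
    cov = M.T @ M + 1e-8 * np.eye(X.shape[1])      # small jitter for invertibility
    return np.trace(np.linalg.inv(cov))

scores = {c: rule_identifiability(X, c) for c in centres}
# Rules with the smallest score are the best identified and would be selected
# first when constructing the initial rule-base.
print(sorted(scores.items(), key=lambda kv: kv[1]))
```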

Relevance:

100.00%

Publisher:

Abstract:

In a world of almost permanent and rapidly increasing electronic data availability, techniques for filtering, compressing, and interpreting this data to transform it into valuable and easily comprehensible information are of utmost importance. One key topic in this area is the capability to deduce future system behaviour from a given data input. This book brings together for the first time the complete theory of data-based neurofuzzy modelling and the linguistic attributes of fuzzy logic in a single cohesive mathematical framework. After introducing the basic theory of data-based modelling, new concepts including extended additive and multiplicative submodels are developed, and their extensions to state estimation and data fusion are derived. All these algorithms are illustrated with benchmark and real-life examples to demonstrate their efficiency. Chris Harris and his group have carried out pioneering work which has tied together the fields of neural networks and linguistic rule-based algorithms. This book is aimed at researchers and scientists in time series modelling, empirical data modelling, knowledge discovery, data mining, and data fusion.

Relevance:

100.00%

Publisher:

Abstract:

Dual-system models suggest that English past tense morphology involves two processing routes: rule application for regular verbs and memory retrieval for irregular verbs (Pinker, 1999). In second language (L2) processing research, Ullman (2001a) suggested that both verb types are retrieved from memory, but more recently Clahsen and Felser (2006) and Ullman (2004) argued that past tense rule application can be automatised with experience by L2 learners. To address this controversy, we tested highly proficient Greek-English learners with naturalistic or classroom L2 exposure, compared with native English speakers, in a self-paced reading task involving past tense forms embedded in plausible sentences. Our results suggest that, irrespective of the type of exposure, proficient L2 learners with extended L2 exposure apply rule-based processing.

Relevance:

100.00%

Publisher:

Abstract:

In order to gain knowledge from large databases, scalable data mining technologies are needed. Data are captured on a large scale and databases are therefore growing at a fast pace, which leads to the use of parallel computing technologies to cope with large amounts of data. In the area of classification rule induction, parallelisation has focused on the divide-and-conquer approach, also known as Top Down Induction of Decision Trees (TDIDT). An alternative approach to classification rule induction is separate-and-conquer, which has only recently become a focus of parallelisation. This work introduces and empirically evaluates a framework for the parallel induction of classification rules generated by members of the Prism family of algorithms, all of which follow the separate-and-conquer approach.
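
For readers unfamiliar with separate-and-conquer induction, the sketch below shows the serial Prism-style loop that such a framework would parallelise: specialise a rule term by term for one target class, remove the instances it covers, and repeat. The toy dataset and the precision-based term selection are illustrative assumptions, not the paper's parallel framework.

```python
# Illustrative separate-and-conquer rule induction in the style of the Prism
# family (serial, toy-sized): greedily add the attribute-value test with the
# highest precision for the target class, then remove the covered instances.
def induce_rules(instances, target_class):
    rules, remaining = [], list(instances)
    while any(i["class"] == target_class for i in remaining):
        rule, covered = [], remaining
        while any(i["class"] != target_class for i in covered):
            # candidate attribute-value tests drawn from the covered instances
            tests = {(a, i[a]) for i in covered for a in i if a != "class"}
            def precision(t):
                a, v = t
                sub = [i for i in covered if i[a] == v]
                return sum(i["class"] == target_class for i in sub) / len(sub)
            best = max(tests, key=precision)
            rule.append(best)
            covered = [i for i in covered if i[best[0]] == best[1]]
        rules.append(rule)                                   # rule is now pure
        remaining = [i for i in remaining if i not in covered]
    return rules

data = [
    {"outlook": "sunny", "windy": "no",  "class": "play"},
    {"outlook": "sunny", "windy": "yes", "class": "stay"},
    {"outlook": "rain",  "windy": "no",  "class": "stay"},
]
print(induce_rules(data, "play"))   # e.g. [[('outlook', 'sunny'), ('windy', 'no')]]
```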

Relevance:

100.00%

Publisher:

Abstract:

Advances in hardware and software in the past decade make it possible to capture, record and process fast data streams at a large scale. The research area of data stream mining has emerged as a consequence of these advances in order to cope with the real-time analysis of potentially large and changing data streams. Examples of data streams include Google searches, credit card transactions, telemetric data and data from continuous chemical production processes. In some cases the data can be processed in batches by traditional data mining approaches. However, some applications require the data to be analysed in real time as soon as it is captured, for example when the data stream is infinite, fast changing, or simply too large to be stored. One of the most important data mining techniques on data streams is classification, which involves training the classifier on the data stream in real time and adapting it to concept drift. Most data stream classifiers are based on decision trees. However, it is well known in the data mining community that there is no single optimal algorithm: an algorithm may work well on one or several datasets but badly on others. This paper introduces eRules, a new rule-based adaptive classifier for data streams, based on an evolving set of rules. eRules induces a set of rules that is constantly evaluated and adapted to changes in the data stream by adding new rules and removing old ones. It differs from the more popular decision-tree-based classifiers in that it tends to leave data instances unclassified rather than forcing a classification that could be wrong. The ongoing development of eRules aims to improve its accuracy further through dynamic parameter setting, which will also address the problem of changing feature domain values.
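
The following sketch illustrates the behaviour described above: an evolving rule set that abstains when no rule covers an instance, retires rules whose accuracy degrades, and buffers uncovered instances as material for inducing new rules. It is not the eRules algorithm itself; the thresholds, rule representation and update policy are assumptions.

```python
# Simplified, hypothetical evolving rule classifier for streams: predict only
# when a rule covers the instance, otherwise abstain and buffer the instance.
from collections import deque

class EvolvingRuleClassifier:
    def __init__(self, min_accuracy=0.6, buffer_size=50):
        self.rules = []                      # list of (condition, label, hits, tries)
        self.unclassified = deque(maxlen=buffer_size)
        self.min_accuracy = min_accuracy

    def predict(self, x):
        for cond, label, _, _ in self.rules:
            if cond(x):
                return label
        return None                          # abstain instead of guessing

    def update(self, x, y):
        """Incremental step: score covering rules, retire inaccurate ones,
        and buffer uncovered instances as data for inducing new rules."""
        covered, kept = False, []
        for cond, label, hits, tries in self.rules:
            if cond(x):
                covered = True
                hits, tries = hits + (label == y), tries + 1
            if tries < 10 or hits / tries >= self.min_accuracy:
                kept.append((cond, label, hits, tries))
        self.rules = kept
        if not covered:
            self.unclassified.append((x, y))

clf = EvolvingRuleClassifier()
clf.rules.append((lambda x: x["amount"] > 1000, "fraud", 0, 0))   # seed rule
print(clf.predict({"amount": 50}))       # None -> abstain
clf.update({"amount": 50}, "genuine")    # instance buffered for later induction
```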

Relevance:

100.00%

Publisher:

Abstract:

This paper reports an expert system (SISTEMAT) developed for the structural determination of diverse chemical classes of natural products, including lignans, based mainly on 13C NMR and 1H NMR data. The system is composed of five programs that analyze specific data for a lignan and output a skeleton probability for the compound. At the end of the analyses the results are grouped, the global probability is computed, and the most probable skeleton is shown to the user. SISTEMAT correctly predicted the skeletons of 80% of the 30 lignans tested, demonstrating its value for shortening the structural elucidation process.
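
The kind of scoring such a system performs can be illustrated with a small sketch that matches observed 13C shifts against characteristic shift ranges of candidate skeletons and normalises the match counts into skeleton probabilities. The shift ranges below are invented placeholders, not SISTEMAT's actual rules.

```python
# Hypothetical skeleton scoring: compare observed 13C NMR shifts with
# characteristic ranges per candidate skeleton and report probabilities.
SKELETON_RULES = {
    "furofuran":      [(85, 88), (71, 73), (54, 56)],   # placeholder ranges (ppm)
    "dibenzylbutane": [(38, 42), (35, 37), (16, 20)],
}

def skeleton_probabilities(observed_shifts):
    scores = {}
    for skeleton, ranges in SKELETON_RULES.items():
        matches = sum(any(lo <= s <= hi for s in observed_shifts) for lo, hi in ranges)
        scores[skeleton] = matches / len(ranges)
    total = sum(scores.values()) or 1.0
    return {k: v / total for k, v in scores.items()}

print(skeleton_probabilities([86.1, 71.9, 55.2, 129.5]))
```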

Relevance:

100.00%

Publisher:

Abstract:

The aim of this work was to design a set of rules for levodopa infusion dose adjustment in Parkinson's disease based on simulation experiments. Using this simulator, optimal infusion doses under different conditions were calculated. Seven conditions (-3 to +3) appear in a rating scale for Parkinson's disease patients. By computing the mean of the differences between conditions and the optimal dose, two sets of rules were designed and then refined through repeated testing. Their usefulness for optimizing the titration procedure for new infusion patients using rule-based reasoning was investigated. The results show that both the number of steps and the errors in finding the optimal dose were reduced by the new rules. Finally, in the simulation experiments the new rules predicted the dose well on each single occasion for the majority of patients.
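
A minimal sketch of rule-based dose titration of this kind is shown below: the patient's rating-scale condition (-3 to +3) selects a proportional adjustment to the current infusion rate, capped per step. The adjustment percentages, the cap and the starting dose are invented placeholders, not the rules derived in this work.

```python
# Hypothetical titration rules: map rating-scale condition to a relative dose
# change (negative scores = underdosed, positive scores = overdosed).
ADJUSTMENT = {-3: +0.30, -2: +0.20, -1: +0.10, 0: 0.0,
              +1: -0.10, +2: -0.20, +3: -0.30}

def next_dose(current_dose, condition, step_limit=0.25):
    """One titration step: move the dose towards the optimum, capped so that a
    single step never changes the dose by more than step_limit (25%)."""
    change = max(-step_limit, min(step_limit, ADJUSTMENT[condition]))
    return current_dose * (1 + change)

dose = 1.2  # ml/h, hypothetical starting infusion rate
for observed_condition in [-2, -1, 0]:   # patient gradually reaches state 0
    dose = next_dose(dose, observed_condition)
    print(round(dose, 3))
```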

Relevance:

100.00%

Publisher:

Abstract:

Over the last decade the problem of surface inspection has received great attention from the scientific community, since quality control and the maintenance of products are key points in several industrial applications. Railway associations spend considerable money checking the railway infrastructure, a field in which periodic surface inspection can help the operator prevent critical situations, so the maintenance and monitoring of this infrastructure is an important concern for railway associations. Surface inspection therefore also matters to railroad authorities that need to investigate track components, identify problems and find out how to solve them. In the railway industry, problems typically arise in sleepers, overhead lines, fasteners, rail heads, switches and crossings, and in the ballast section. In this thesis work I have reviewed research papers that combine AI techniques with non-destructive testing (NDT) techniques, which can collect data from the test object without causing any damage. The reviewed work demonstrates that, by adopting AI-based systems, most of these problems can be addressed, and that such systems are reliable and efficient for diagnosing problems in this transportation domain. I have also reviewed solutions and products offered by different companies based on AI techniques, together with some of their white papers. AI-based techniques such as machine vision, stereo vision, laser-based methods and neural networks are used in most cases to solve problems otherwise handled manually by railway engineers. These techniques are applied within an NDT approach, a broad interdisciplinary field that plays a critical role in assuring that structural components and systems perform their function reliably and cost-effectively. NDT verifies the uniformity, quality and serviceability of materials without damaging the material being tested, using methods such as visual and optical testing, radiography, magnetic particle testing, ultrasonic testing, penetrant testing, electromechanical testing and acoustic emission testing. Inspection is carried out periodically for better maintenance, performed by railway engineers with the aid of AI-based techniques.
The main idea of this thesis is to demonstrate how problems in this transportation area can be reduced, based on the work done by different researchers and companies; I also provide comments on that work and propose where better inspection methods are needed. The scope of the thesis is the automatic interpretation of data from NDT, with the goal of detecting flaws accurately and efficiently; AI techniques such as neural networks, machine vision, knowledge-based systems and fuzzy logic have been applied to a wide spectrum of problems in this area. A further aim is to provide insight into possible research methods concerning railway sleeper, fastener, ballast and overhead inspection through automatic interpretation of such data. In this thesis I discuss problems arising in railway sleepers, fasteners, overhead lines and ballasted track, review research papers related to these areas, demonstrate how the proposed systems work and what results they obtain, and contrast the advantages of AI techniques with the manual systems previously in place. Overall, this work summarizes the findings of a large number of research papers deploying artificial intelligence (AI) techniques for the automatic interpretation of NDT data, focusing on problems in the rail transport domain and on the inspection of railway sleepers, fasteners, ballast and overhead lines.

Relevance:

100.00%

Publisher:

Abstract:

Some 50% of the people in the world live in rural areas, often under harsh conditions and in poverty. The need for knowledge of how to improve living conditions is well documented. In response to this need, new knowledge of how to improve living conditions in rural areas and elsewhere is continuously being developed by researchers and practitioners around the world. People in rural areas, in particular, would certainly benefit from being able to share relevant knowledge with each other, as well as with stakeholders (e.g. researchers) and other organizations (e.g. NGOs). Central to knowledge management is the idea of knowledge sharing. This study is based on the assumption that knowledge management can support sustainable development in rural and remote regions. It aims to present a framework for knowledge management in sustainable rural development, together with an inventory of existing frameworks. The study is interpretive, with interviews as the primary source for the inventory of stakeholders, knowledge categories and Information and Communications Technology (ICT) infrastructure; for the inventory of frameworks, a literature study was carried out. The result is a categorization of the stakeholders who act as producers and beneficiaries of explicit and indigenous development knowledge: local government, local population, academia, NGOs, civil society and donor agencies. Furthermore, the study presents a categorization of the development knowledge produced by the stakeholders, together with specifications of the existing ICT infrastructure. The rural development categories found are research, funding, agriculture, ICT, gender, institutional development, local infrastructure development, and marketing & enterprise. Finally, a compiled framework is presented, based on ten existing frameworks for rural development found in the literature study and on the empirical findings of the Gilgit-Baltistan case. Our proposed framework is divided into four levels: level one consists of the identified stakeholders, level two of the rural development categories, level three of the knowledge management system, and level four of sustainable rural development built on the levels below. In the proposed framework we claim that the sustainability of rural development can be achieved through a knowledge society in which knowledge of the rural development process is shared among all relevant stakeholders.

Relevance:

100.00%

Publisher:

Abstract:

HydroShare is an online, collaborative system being developed for open sharing of hydrologic data and models. The goal of HydroShare is to enable scientists to easily discover and access hydrologic data and models, retrieve them to their desktop or perform analyses in a distributed computing environment that may include grid, cloud or high performance computing model instances as necessary. Scientists may also publish outcomes (data, results or models) into HydroShare, using the system as a collaboration platform for sharing data, models and analyses. HydroShare is expanding the data sharing capability of the CUAHSI Hydrologic Information System by broadening the classes of data accommodated, creating new capability to share models and model components, and taking advantage of emerging social media functionality to enhance information about and collaboration around hydrologic data and models. One of the fundamental concepts in HydroShare is that of a Resource. All content is represented using a Resource Data Model that separates system and science metadata and has elements common to all resources as well as elements specific to the types of resources HydroShare will support. These will include different data types used in the hydrology community and models and workflows that require metadata on execution functionality. The HydroShare web interface and social media functions are being developed using the Drupal content management system. A geospatial visualization and analysis component enables searching, visualizing, and analyzing geographic datasets. The integrated Rule-Oriented Data System (iRODS) is being used to manage federated data content and perform rule-based background actions on data and model resources, including parsing to generate metadata catalog information and the execution of models and workflows. This presentation will introduce the HydroShare functionality developed to date, describe key elements of the Resource Data Model and outline the roadmap for future development.
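
As an illustration of the Resource concept, the sketch below separates system metadata common to all resources from science metadata specific to a resource type; the field names and example values are assumptions, not the actual HydroShare schema.

```python
# Minimal, hypothetical sketch of a resource data model that keeps system
# metadata (common to all resources) apart from type-specific science metadata.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Resource:
    resource_id: str
    resource_type: str                      # e.g. "TimeSeries", "ModelInstance"
    system_metadata: Dict[str, str] = field(default_factory=dict)   # owner, dates, sharing status
    science_metadata: Dict[str, str] = field(default_factory=dict)  # type-specific descriptors

ts = Resource(
    resource_id="abc123",
    resource_type="TimeSeries",
    system_metadata={"owner": "jdoe", "created": "2014-05-01", "sharing": "public"},
    science_metadata={"variable": "streamflow", "units": "m3/s", "site": "USGS 01491000"},
)
print(ts.resource_type, ts.science_metadata["variable"])
```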