979 results for Knowledge extraction


Relevance:

100.00%

Publisher:

Abstract:

The tourism sector is an area of marked growth in Portugal and has been developing its promotion and marketing strategy. However, that strategy relies only on performance and installed-capacity indicators (number of rooms, hotels, flights, overnight stays), leaving statistical indicators in the background. The World Economic Forum's "Travel & Tourism Competitiveness Report 2013" ranks Portugal 72nd for the quality and coverage of the statistical information available for the tourism sector; Spain, by comparison, ranks 3rd. A market strategy without an analytical basis to support a specific and objective framework of guidelines, grounded in relevant knowledge of the target markets, is hardly comprehensible or even achievable. The implementation of a Business Intelligence structure that allows data to be collected and processed, so that the results obtained in the tourism sector can be related and substantiated, is fundamental and crucial for creating market strategies. These strategies are built from information about the tourists who visit us, and about potential tourists, so that they can be attracted in the future. Analysing tourists' characteristics and behavioural patterns makes it possible to define distinct profiles and thus detect market trends, in order to promote the most suitable products and services. The knowledge obtained makes it possible, on the one hand, to create and make available the most attractive products to offer tourists and, on the other, to inform them, in a targeted way, of the existence of those products. Thus, a personalised recommendation which, based on knowledge of tourist profiles, advises on the best products emerges as an essential tool for capturing and expanding the market.
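The profile-based recommendation step described above can be sketched as a simple profile-to-product similarity ranking. Everything below (the feature axes, the tourist profile and the product vectors) is a hypothetical illustration, not data or a method from the work itself:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(profile, products, top_n=2):
    """Rank product feature vectors by similarity to a tourist profile."""
    ranked = sorted(products.items(),
                    key=lambda kv: cosine(profile, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_n]]

# Hypothetical feature axes: [beach, culture, nightlife, nature]
profile = [0.9, 0.3, 0.1, 0.4]            # an inferred tourist profile
products = {
    "coastal resort":  [1.0, 0.1, 0.2, 0.3],
    "museum pass":     [0.0, 1.0, 0.1, 0.1],
    "hiking package":  [0.2, 0.2, 0.0, 1.0],
}
top = recommend(profile, products)
```

A real system would derive the profile vectors from the behavioural-pattern analysis the abstract mentions; the ranking step itself stays this simple.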

Relevance:

100.00%

Publisher:

Abstract:

Automatic indexing and retrieval of digital data pose major challenges. The main problem arises from the ever-increasing mass of digital media and the lack of efficient methods for indexing and retrieving such data based on semantic content rather than keywords. To enable intelligent web interactions, or even web filtering, we need to be capable of interpreting the information base in an intelligent manner. For a number of years research has been ongoing in the field of ontological engineering, with the aim of using ontologies to add such (meta) knowledge to information. In this paper, we describe the architecture of a system (Dynamic REtrieval Analysis and semantic metadata Management, DREAM) designed to automatically and intelligently index huge repositories of special effects video clips, based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval. The DREAM Demonstrator has been evaluated as deployed in the film post-production phase to support the storage, indexing and retrieval of large data sets of special effects video clips as an exemplar application domain. This paper provides its performance and usability results and highlights the scope for future enhancements of the DREAM architecture, which has proven successful in its first and possibly most challenging proving ground, namely film production, where it is already in routine use within our test-bed partners' creative processes. (C) 2009 Published by Elsevier B.V.

Relevance:

100.00%

Publisher:

Abstract:

This paper introduces a new neurofuzzy model construction and parameter estimation algorithm from observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships among these matrix subspaces, and the correlation between the output vector and a rule-base matrix subspace to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix that consists of the corresponding fuzzy membership functions over the training data set. Model transparency is explored by deriving an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule bases can be effectively measured by their identifiability via the A-optimality criterion. The A-optimality criterion of the weighting matrices of fuzzy rules is used to construct an initial model rule base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level. This new approach is computationally simpler than the conventional Gram-Schmidt algorithm for resolving high-dimensional regression problems, where it is computationally desirable to decompose complex models into a few submodels rather than a single model with a large number of input variables and the associated curse of dimensionality. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
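The two building blocks named in the abstract, an A-optimality measure of a rule's weighting matrix and Gram-Schmidt orthogonal decomposition, can be sketched roughly as below. The regression matrix and membership values are synthetic stand-ins, and this is classical Gram-Schmidt, not the paper's extended variant:

```python
import numpy as np

def a_optimality(phi, mu):
    """A-optimality-style score for one fuzzy rule: trace of
    (W^T W)^(-1), where W = diag(mu) @ phi weights the regression
    matrix by the rule's memberships.  Smaller means more identifiable."""
    W = mu[:, None] * phi
    return np.trace(np.linalg.inv(W.T @ W))

def gram_schmidt(A):
    """Classical Gram-Schmidt: orthonormal columns spanning col(A)."""
    Q = np.zeros_like(A, dtype=float)
    for j in range(A.shape[1]):
        v = A[:, j].astype(float)
        for i in range(j):
            v -= (Q[:, i] @ A[:, j]) * Q[:, i]   # remove projections
        Q[:, j] = v / np.linalg.norm(v)
    return Q

rng = np.random.default_rng(0)
phi = rng.normal(size=(50, 3))              # input regression matrix
mu_rules = [rng.uniform(0.5, 1.0, 50),      # rule 1: strong memberships
            rng.uniform(0.0, 0.1, 50)]      # rule 2: weak memberships
scores = [a_optimality(phi, mu) for mu in mu_rules]
Q = gram_schmidt(mu_rules[0][:, None] * phi)
```

The strongly activated rule gets the smaller (better) score, matching the identifiability reading of A-optimality in the abstract.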

Relevance:

100.00%

Publisher:

Abstract:

A new robust neurofuzzy model construction algorithm has been introduced for the modeling of a priori unknown dynamical systems from observed finite data sets in the form of a set of fuzzy rules. Based on a Takagi-Sugeno (T-S) inference mechanism, a one-to-one mapping between a fuzzy rule base and a model matrix feature subspace is established. This link enables rule-based knowledge to be extracted from the matrix subspace to enhance model transparency. In order to achieve maximal model robustness and sparsity, a new robust extended Gram-Schmidt (G-S) method has been introduced via two effective and complementary approaches: regularization and D-optimality experimental design. Model rule bases are decomposed into orthogonal subspaces, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level. A locally regularized orthogonal least squares algorithm, combined with a D-optimality criterion for subspace-based rule selection, has been extended for fuzzy rule regularization and subspace-based information extraction. By using a weighting for the D-optimality cost function, the entire model construction procedure becomes automatic. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
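A rough sketch of the two complementary ingredients, local regularization and a D-optimality score of a rule's weighted design matrix. The data, noise level and regularization constant are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def d_optimality(W):
    """D-optimality-style score: log-determinant of W^T W.
    Larger means a better-conditioned, more informative rule subspace."""
    return np.linalg.slogdet(W.T @ W)[1]

def regularized_ls(W, y, lam=1e-2):
    """Locally regularized (ridge) least squares for one rule subspace."""
    n = W.shape[1]
    return np.linalg.solve(W.T @ W + lam * np.eye(n), W.T @ y)

rng = np.random.default_rng(1)
phi = rng.normal(size=(80, 4))            # input regression matrix
mu = rng.uniform(0.2, 1.0, 80)            # fuzzy memberships of one rule
W = mu[:, None] * phi                     # rule-weighted design matrix
theta_true = np.array([1.0, -2.0, 0.5, 0.0])
y = W @ theta_true + 0.01 * rng.normal(size=80)
theta = regularized_ls(W, y)
score = d_optimality(W)
```

In the paper's scheme such scores steer which rule subspaces enter the model, while the regularization keeps each rule's parameter estimate robust and sparse.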

Relevance:

100.00%

Publisher:

Abstract:

One of the main problems with Artificial Neural Networks (ANNs) is that their results are not intuitively clear. For example, commonly used hidden neurons with a sigmoid activation function can approximate any continuous function, including linear functions, but the coefficients (weights) of this approximation are rather meaningless. To address this problem, this paper presents a novel kind of neural network that uses transfer functions of various complexities, in contrast to the mono-transfer functions used in sigmoid and hyperbolic tangent networks. The presence of transfer functions of various complexities in a Mixed Transfer Functions Artificial Neural Network (MTFANN) allows easy conversion of the full model into a user-friendly equation format (similar to that of linear regression) without any pruning or simplification of the model. At the same time, MTFANN maintains a generalization ability similar to that of mono-transfer-function networks in a global optimization context. The performance and knowledge extraction of MTFANN were evaluated on a realistic simulation of the Puma 560 robot arm and compared to sigmoid, hyperbolic tangent, linear and sinusoidal networks.
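The core idea, mixing transfer functions of different complexities so the trained model can be read off as an equation, can be sketched as follows. The particular transfer set and weights are hypothetical, not the paper's trained model:

```python
import math

# Hypothetical mixed transfer functions, one per hidden unit.
TRANSFERS = {
    "linear":  lambda x: x,
    "sigmoid": lambda x: 1.0 / (1.0 + math.exp(-x)),
    "sine":    lambda x: math.sin(x),
}

def mtfann(x, hidden):
    """Evaluate a one-input mixed-transfer network: each hidden unit
    has its own transfer function, input weight w, and output weight v."""
    return sum(v * TRANSFERS[name](w * x) for name, w, v in hidden)

def as_equation(hidden):
    """Render the network as a human-readable equation; this direct
    readability is the point of mixing transfer functions."""
    return " + ".join(f"{v:g}*{name}({w:g}*x)" for name, w, v in hidden)

hidden = [("linear", 1.0, 2.0), ("sine", 3.0, 0.5)]
eq = as_equation(hidden)   # "2*linear(1*x) + 0.5*sine(3*x)"
```

With mono-transfer sigmoid units the same weights carry no such reading; here each term names its own complexity class.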

Relevance:

100.00%

Publisher:

Abstract:

One of the big problems with Artificial Neural Networks (ANNs) is that their results are not intuitively clear. For example, if we use traditional neurons with a sigmoid activation function, we can approximate any function, including linear functions, but the coefficients (weights) in this approximation will be rather meaningless. To resolve this problem, this paper presents a novel kind of ANN with different transfer functions mixed together. The aims of such a network are (i) to obtain better generalization than current networks, (ii) to obtain knowledge from the network without a sophisticated knowledge extraction algorithm, and (iii) to increase the understanding and acceptance of ANNs. A Transfer Complexity Ratio is defined to make sense of the weights associated with the network. The paper begins with a review of knowledge extraction from ANNs and then presents the Mixed Transfer Function Artificial Neural Network (MTFANN). An MTFANN contains different transfer functions mixed together rather than mono-transfer functions. This mixed presence has helped to obtain high-level knowledge and generalization similar to that of mono-transfer-function nets in a global optimization context.

Relevance:

100.00%

Publisher:

Abstract:

This thesis develops a novel framework of nonlinear modelling that adaptively fits the complexity of the model to the problem domain, resulting in better modelling capability and straightforward knowledge acquisition. The developed framework also permits increased comprehensibility and user acceptability of modelling results.

Relevance:

100.00%

Publisher:

Abstract:

Much has been researched and discussed about the role played by knowledge in organizations. We are witnessing the establishment of the knowledge economy, but this "new economy" carries with it a whole complex system of metrics and evaluations and cannot be dissociated from them. Given their importance, knowledge management initiatives must be continually assessed on their progress, in order to verify whether they are moving towards their goals. Thus, good measurement practices should include not only how the organization quantifies its knowledge capital, but also how resources are allocated to support its growth. With the aspects listed above in mind, this paper presents an approach to a model for knowledge extraction using an ERP system, suggesting the establishment of a set of indicators for assessing organizational performance. The objective is to evaluate the implementation of knowledge management projects and thus observe the general development of the organization.

Relevance:

100.00%

Publisher:

Abstract:

The analysis of large amounts of data is better performed by humans when the data are represented in a graphical format. Therefore, a new research area called Visual Data Mining is being developed, endeavoring to use the number-crunching power of computers to prepare data for visualization, allied to the ability of humans to interpret data presented graphically. This work presents the results of applying a visual data mining tool, called FastMapDB, to detect the behavioral pattern exhibited by a dataset of clinical information about hemoglobinopathies known as thalassemia. FastMapDB is a visual data mining tool that takes tabular data stored in a relational database, such as dates, numbers and texts, and, by considering them as points in a multidimensional space, maps them to a three-dimensional space. The intuitive three-dimensional representation of objects enables a data analyst to see the behavior of the characteristics of abnormal forms of hemoglobin, highlighting the differences when compared to data from a group without alteration.
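FastMapDB's mapping of multidimensional records to a low-dimensional space is in the spirit of the FastMap algorithm, which projects points onto the line through a far-apart pivot pair using the cosine law. A minimal one-axis sketch, with toy points rather than clinical data:

```python
import math

def dist(p, q):
    """Euclidean distance in the original high-dimensional space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def fastmap_axis(points):
    """One FastMap projection axis: pick a far-apart pivot pair with a
    simple two-step heuristic, then project each point onto the pivot
    line via the cosine law."""
    a = max(points, key=lambda p: dist(points[0], p))
    b = max(points, key=lambda p: dist(a, p))
    dab = dist(a, b)
    return [(dist(a, p) ** 2 + dab ** 2 - dist(b, p) ** 2) / (2 * dab)
            for p in points]

# Toy 4-dimensional records standing in for database rows.
points = [(0, 0, 0, 0), (1, 1, 0, 0), (5, 5, 0, 0)]
coords = fastmap_axis(points)
```

Repeating this on residual distances gives the second and third axes, yielding the three-dimensional view a tool like FastMapDB displays.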

Relevance:

100.00%

Publisher:

Abstract:

This article describes the work performed on the database of questions belonging to the different opinion polls carried out during the last 50 years in Spain. Approximately half of the questions are provided with a title, while the other half remain untitled. The work and the techniques implemented to automatically generate titles for the untitled questions are described. This process is performed over very short texts, and the generated titles are subject to strong stylistic conventions and should be fully grammatical pieces of Spanish.

Relevance:

100.00%

Publisher:

Abstract:

Folksonomies emerge as the result of the free tagging activity of a large number of users over a variety of resources. They can be considered as valuable sources from which it is possible to obtain emerging vocabularies that can be leveraged in knowledge extraction tasks. However, when it comes to understanding the meaning of tags in folksonomies, several problems mainly related to the appearance of synonymous and ambiguous tags arise, specifically in the context of multilinguality. The authors aim to turn folksonomies into knowledge structures where tag meanings are identified, and relations between them are asserted. For such purpose, they use DBpedia as a general knowledge base from which they leverage its multilingual capabilities.
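One way such a system might interrogate DBpedia for the multilingual labels of a candidate tag meaning is via a SPARQL query. The resource IRI and language list below are purely illustrative; a real pipeline would first map the raw tag to candidate DBpedia resources:

```python
def label_query(resource_iri, langs=("en", "es", "pt")):
    """Build a SPARQL query asking DBpedia for a resource's labels in
    several languages, the raw material for matching tags across
    languages to one shared meaning."""
    lang_filter = " || ".join(f'lang(?label) = "{l}"' for l in langs)
    return (
        "SELECT ?label WHERE { "
        f"<{resource_iri}> rdfs:label ?label . "
        f"FILTER ({lang_filter}) }}"
    )

# Hypothetical ambiguous tag resolved to one candidate resource.
q = label_query("http://dbpedia.org/resource/Jaguar")
```

If two tags in different languages resolve to resources sharing labels, they can be asserted as synonymous in the resulting knowledge structure.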

Relevance:

80.00%

Publisher:

Abstract:

In large organizations the resources needed to solve challenging problems are typically dispersed over systems within and beyond the organization, and also across different media. However, there is still the need, in knowledge environments, for extraction methods able to combine evidence for a fact from across different media. In many cases the whole is more than the sum of its parts: only when considering the different media simultaneously can enough evidence be obtained to derive facts otherwise inaccessible to the knowledge worker via traditional methods that work on each single medium separately. In this paper, we present a cross-media knowledge extraction framework specifically designed to handle large volumes of documents composed of three types of media (text, images and raw data) and to exploit the evidence across the media. Our goal is to improve the quality and depth of automatically extracted knowledge.
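A minimal sketch of how per-medium evidence can be combined so the whole exceeds any single part, here with a simple noisy-OR over hypothetical confidence scores (the abstract does not specify the framework's actual combination rule):

```python
def noisy_or(scores):
    """Combine independent per-medium confidences with a noisy-OR:
    the fact is missed only if every medium misses it."""
    miss = 1.0
    for s in scores:
        miss *= (1.0 - s)
    return 1.0 - miss

# Hypothetical confidences that the same fact holds, one per medium.
evidence = {"text": 0.55, "image": 0.50, "raw_data": 0.40}
combined = noisy_or(evidence.values())
```

No single medium clears, say, a 0.7 acceptance threshold here, yet the combined score does, which is exactly the cross-media effect the abstract describes.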

Relevance:

80.00%

Publisher:

Abstract:

An approach is developed for extracting knowledge from the information arriving at the knowledge base input, and for distributing the new knowledge over the knowledge subsets already present in the knowledge base. It is also necessary to transform the knowledge into parameters (data) of the model for subsequent decision-making on the given subset. The decision-making is assumed to be realized with the apparatus of fuzzy sets.
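The fuzzy-set apparatus mentioned for decision-making can be sketched with triangular membership functions over one extracted parameter; the partition and threshold values below are a hypothetical illustration, not the paper's model:

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def decide(x):
    """Fuzzify an extracted model parameter and pick the linguistic
    label with the highest membership grade."""
    grades = {
        "low":    tri(x, -0.5, 0.0, 0.5),
        "medium": tri(x, 0.0, 0.5, 1.0),
        "high":   tri(x, 0.5, 1.0, 1.5),
    }
    return max(grades, key=grades.get), grades

label, grades = decide(0.8)
```

The knowledge extracted from incoming information would set the parameter values, and each knowledge subset would carry its own fuzzy partition for decisions like this one.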