951 results for Domain knowledge


Relevance: 30.00%

Publisher:

Abstract:

Publisher's version: http://www.isegi.unl.pt/docentes/acorreia/documentos/European_Challenge_KM_Innovation_2004.pdf

Relevance: 30.00%

Publisher:

Abstract:

The emergence of new business models, namely the establishment of partnerships between organizations, and the opportunity companies have to enrich their information with existing data on the web, especially on the semantic web, have highlighted several problems in databases, particularly those related to data quality. Poor data can cause organizations to lose competitiveness and may even lead to their disappearance, since many of their decision-making processes rely on these data. For this reason, data cleaning is essential. Current approaches to these problems are closely tied to database schemas and specific domains. For data cleaning to be usable across different repositories, computer systems must be able to understand the data, i.e., associated semantics are needed. The solution presented in this paper uses ontologies (i) to specify data cleaning operations and (ii) to resolve the semantic heterogeneity of data stored in different sources. With data cleaning operations defined at a conceptual level, and with mappings established between domain ontologies and an ontology derived from a database, the operations can be instantiated and proposed to the expert/specialist for execution over that database, thus enabling interoperability.
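The abstract stays at the conceptual level; as a minimal sketch of the idea, with every concept name, mapping and operation invented purely for illustration (the paper's actual ontologies are not reproduced), the instantiation step could look like this:

```python
# Hedged sketch: cleaning operations are specified against domain-ontology
# concepts, then instantiated over one concrete database through a
# concept-to-column mapping (e.g. derived from that database's schema).
# All names below are illustrative assumptions, not the paper's artifacts.

# Cleaning operations defined at the conceptual (ontology) level.
conceptual_operations = {
    "Person.email": lambda v: v.strip().lower(),        # normalise casing
    "Person.birthDate": lambda v: v.replace("/", "-"),  # unify date format
}

# Mapping between ontology concepts and one database's tables/columns.
schema_mapping = {
    "Person.email": ("customers", "email_addr"),
    "Person.birthDate": ("customers", "dob"),
}

def instantiate_operations(rows, table):
    """Apply every conceptual operation whose concept maps to `table`."""
    for concept, op in conceptual_operations.items():
        mapped_table, column = schema_mapping[concept]
        if mapped_table == table:
            for row in rows:
                row[column] = op(row[column])
    return rows

rows = [{"email_addr": "  Ana@Example.COM ", "dob": "1990/05/01"}]
print(instantiate_operations(rows, "customers"))
# [{'email_addr': 'ana@example.com', 'dob': '1990-05-01'}]
```

Because the operations are attached to concepts rather than to columns, the same two rules could be re-instantiated over a differently named schema just by swapping the mapping.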

Relevance: 30.00%

Publisher:

Abstract:

In this paper we discuss how the inclusion of semantic functionalities in a Learning Objects Repository allows a better characterization of the learning materials it holds and improves their retrieval through the adoption of query expansion strategies. We use ontologies to automatically suggest additional concepts while users are filling in metadata fields, and to add new terms to those initially provided when users specify the keywords of interest in a query. Since the repository covers different domain areas, and developing many different ontologies proved impractical, we adopted strategies for reusing ontologies in order to obtain the knowledge necessary for our institutional repository. In this paper we review the area of knowledge reuse and discuss our approach.
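As a hedged illustration of the query expansion strategy described above (the repository's actual ontologies and suggestion logic are not given in the abstract, so the tiny "ontology" here is invented), the mechanism amounts to:

```python
# Toy sketch of ontology-based query expansion: terms related to each
# keyword in a (hypothetical) ontology are appended to the user's query.

ontology_related_terms = {
    "ontology": ["knowledge representation", "OWL"],
    "repository": ["digital library", "learning objects"],
}

def expand_query(keywords):
    """Add ontology-suggested terms to the user's original keywords."""
    expanded = list(keywords)
    for kw in keywords:
        expanded.extend(ontology_related_terms.get(kw.lower(), []))
    return expanded

print(expand_query(["ontology", "repository"]))
# ['ontology', 'repository', 'knowledge representation', 'OWL',
#  'digital library', 'learning objects']
```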

Relevance: 30.00%

Publisher:

Abstract:

Resource system selection plays an important role in the integration of Distributed/Agile/Virtual Enterprises (D/A/VEs). However, as this paper points out, resource system selection remains a difficult problem to solve in a D/A/VE. Globally, the selection problem has been approached from different angles, giving rise to different kinds of models and algorithms to solve it. To support the development of an intelligent and flexible web prototype (broker tool) that integrates all the selection model's activities and tools and can adapt to each D/A/VE project or instance (the major goal of our overall project), this paper presents a formulation of one kind of resource selection problem and the limitations of the algorithms proposed to solve it. We formulate a particular case of the problem as an integer program, which is solved using simplex and branch-and-bound algorithms, and identify their performance limitations (in terms of processing time) based on simulation results. These limitations depend on the number of processing tasks and on the number of pre-selected resources per processing task, defining the domain of applicability of the algorithms for the problem studied. The limitations detected point to the need for other kinds of algorithms (approximate solution algorithms) outside the domain of applicability found for the simulated algorithms. For a broker tool, however, knowledge of algorithm limitations is very important, so that, based on the features of a problem, the most suitable algorithm can be developed and selected to guarantee good performance.
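The abstract does not reproduce the integer program itself. As a hedged sketch, one plausible formulation consistent with the two parameters the abstract identifies (number of tasks, number of pre-selected resources per task) assigns one pre-selected resource \(j\), at cost \(c_{ij}\), to each processing task \(i\):

\[
\begin{aligned}
\min \quad & \sum_{i=1}^{T} \sum_{j \in R_i} c_{ij}\, x_{ij} \\
\text{s.t.} \quad & \sum_{j \in R_i} x_{ij} = 1, \qquad i = 1, \dots, T \\
& \sum_{i \,:\, j \in R_i} x_{ij} \le u_j, \qquad \text{for each shared resource } j \\
& x_{ij} \in \{0, 1\},
\end{aligned}
\]

where \(T\) is the number of processing tasks, \(R_i\) the set of pre-selected resources for task \(i\), and \(u_j\) an assumed capacity that couples the tasks (without some coupling constraint the problem would decompose per task and branch-and-bound would be unnecessary). Processing time then grows with \(T\) and \(|R_i|\), matching the two dimensions along which the abstract reports the simulated limits.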

Relevance: 30.00%

Publisher:

Abstract:

Paper presented at the 8th European Conference on Knowledge Management, Barcelona, 6-7 September 2007. URL: http://www.academic-conferences.org/eckm/eckm2007/eckm07-home.htm

Relevance: 30.00%

Publisher:

Abstract:

This chapter appears in Innovations of Knowledge Management, edited by D. Montano. Copyright 2004, IGI Global, www.igi-global.com. Posted by permission of the publisher.

Relevance: 30.00%

Publisher:

Abstract:

To meet the increasing demands of complex inter-organizational processes and the demand for continuous innovation and internationalization, new forms of organisation are evidently being adopted, fostering more intensive collaboration processes and sharing of resources, in what can be called collaborative networks (Camarinha-Matos, 2006:03). Information and knowledge are crucial resources in collaborative networks, and their management is a fundamental process to optimize. Knowledge organisation and collaboration systems are thus important instruments for the success of collaborative networks of organisations, and have been researched over the last decade in the areas of computer science, information science, management sciences, terminology and linguistics. Nevertheless, research in this area has paid little attention to multilingual contexts of collaboration, which pose specific and challenging problems. It is clear that access to and representation of knowledge will increasingly happen in multilingual settings, which implies overcoming the difficulties inherent to the presence of multiple languages through processes such as the localization of ontologies. Although localization, like other processes that involve multilingualism, is a rather well-developed practice, and its methodologies and tools are fruitfully employed by the language industry in the development and adaptation of multilingual content, it has not yet been sufficiently explored as an element of support to the development of knowledge representations - in particular ontologies - expressed in more than one language. Multilingual knowledge representation is therefore an open research area calling for cross-contributions from knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences.

This workshop brought together researchers interested in multilingual knowledge representation, in a multidisciplinary environment, to debate the possibilities of cross-fertilization between these disciplines as applied to contexts where multilingualism continuously creates new and demanding challenges for current knowledge representation methods and techniques. Six papers dealing with different approaches to multilingual knowledge representation are presented, most of them describing tools, approaches and results obtained in ongoing projects.

In the first paper, Andrés Domínguez Burgos, Koen Kerremans and Rita Temmerman present a software module that is part of a workbench for terminological and ontological mining: Termontospider, a wiki crawler that aims to optimally traverse Wikipedia in search of domain-specific texts for extracting terminological and ontological information. The crawler is part of a tool suite for automatically developing multilingual termontological databases, i.e. ontologically underpinned multilingual terminological databases. The authors describe the basic principles behind the crawler and summarize the research setting in which the tool is currently being tested.

In the second paper, Fumiko Kano presents a comparison of four feature-based similarity measures derived from the cognitive sciences, aimed at identifying the potentially most effective model for mapping independent ontologies in a culturally influenced domain. For that purpose, the similarity measures are compared on objectively developed datasets built from standardized, pre-defined feature dimensions and values obtainable from the UNESCO Institute for Statistics (UIS). According to the author, the results demonstrate that the Bayesian Model of Generalization provides the most effective cognitive model for identifying the most similar corresponding concepts for a targeted socio-cultural community.

In another presentation, Thierry Declerck, Hans-Ulrich Krieger and Dagmar Gromann present ongoing work and propose an approach to the automatic extraction of information from multilingual financial Web resources, providing candidate terms for building ontology elements or instances of ontology concepts. The authors present an approach complementary to the direct localization/translation of ontology labels: acquiring terminologies by accessing and harvesting the multilingual Web presences of structured information providers in the field of finance. This leads to the detection of candidate terms, in various multilingual sources in the financial domain, that can be used not only as labels of ontology classes and properties but also for the possible generation of (multilingual) domain ontologies themselves.

In the next paper, Manuel Silva, António Lucas Soares and Rute Costa claim that, despite the availability of tools, resources and techniques aimed at the construction of ontological artifacts, developing a shared conceptualization of a given reality still raises questions about the principles and methods that support the initial phases of conceptualization. According to the authors, these questions become more complex when the conceptualization occurs in a multilingual setting. To tackle these issues, the authors present a collaborative platform - conceptME - where terminological and knowledge representation processes support domain experts throughout a conceptualization framework, allowing the inclusion of multilingual data as a way to promote knowledge sharing, enhance conceptualization and support multilingual ontology specification.

In another presentation, Frieda Steurs and Hendrik J. Kockaert present TermWise, a large project dealing with legal terminology and phraseology for the Belgian public services, i.e. the translation office of the Ministry of Justice. The project aims to develop an advanced tool that embeds expert knowledge in the algorithms used to extract specialized language from textual data (legal documents); its outcome is a knowledge database of Dutch/French equivalents for legal concepts, enriched with the phraseology related to the terms under discussion.

Finally, Deborah Grbac, Luca Losito, Andrea Sada and Paolo Sirito report on the preliminary results of a pilot project currently ongoing at the UCSC Central Library, in which they propose to adapt, for subject librarians employed in large and multilingual academic institutions, the model used by translators working within European Union institutions. The authors use User Experience (UX) analysis to provide subject librarians with visual support, by means of "ontology tables" depicting the conceptual linking and connections of words with concepts, presented according to their semantic and linguistic meaning.

The organizers hope that the selection of papers presented here will be of interest to a broad audience and will serve as a starting point for further discussion and cooperation.

Relevance: 30.00%

Publisher:

Abstract:

Dissertation presented to obtain the Master's Degree in Electrical and Computer Engineering

Relevance: 30.00%

Publisher:

Abstract:

Dissertation presented to obtain the Master's Degree in Electrical and Computer Engineering

Relevance: 30.00%

Publisher:

Abstract:

With the growth of the internet, through the semantic web, together with improved communication speeds and the rapid growth of storage capacity, the volume of data and information rises considerably every day. Because of this, in the last few years there has been growing interest in structures for formal representation with suitable characteristics, such as the ability to organize data and information and to reuse their contents for the generation of new knowledge. Controlled vocabularies, and specifically ontologies, stand out as representation structures with high potential: they not only allow data to be represented, but also allow such data to be reused for knowledge extraction, coupled with subsequent storage through relatively simple formalisms. However, to ensure that ontology knowledge is always up to date, ontologies need maintenance. Ontology learning is the area that studies the update and maintenance of ontologies. The relevant literature already presents first results on the automatic maintenance of ontologies, but still at a very early stage; human-driven processes remain the usual way to update and maintain an ontology, which makes this a cumbersome task. The generation of new knowledge for ontology growth can be based on data mining techniques, the area that studies techniques for data processing, pattern discovery and knowledge extraction in IT systems. This work proposes a novel semi-automatic method for knowledge extraction from unstructured data sources using data mining techniques, namely pattern discovery, focused on improving the precision of the concepts and semantic relations present in an ontology. To verify the applicability of the proposed method, a proof of concept was developed and its results are presented, applied to the building and construction sector.
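The abstract does not detail the patterns used; one classic instance of pattern-based discovery of candidate semantic relations from unstructured text, shown here purely as an illustrative sketch (not the paper's method or corpus), uses Hearst-style lexico-syntactic patterns:

```python
import re

# Illustrative sketch only: the classic "X such as Y1, Y2 and Y3" pattern
# proposes candidate hyponym-of relations that could be offered to an
# expert to enrich an ontology. The paper's actual patterns and its
# building/construction data are not reproduced here.

SUCH_AS = re.compile(r"(\w+(?: \w+)*?),?\s+such as\s+([^.;]+)")

def candidate_relations(text):
    """Return (hypernym phrase, hyponym) pairs suggested by 'such as'."""
    pairs = []
    for m in SUCH_AS.finditer(text):
        hypernym = m.group(1).strip()
        for hyponym in re.split(r",| and | or ", m.group(2)):
            hyponym = hyponym.strip()
            if hyponym:
                pairs.append((hypernym, hyponym))
    return pairs

text = "Thermal insulation materials, such as cork, mineral wool and fiberglass."
print(candidate_relations(text))
# [('Thermal insulation materials', 'cork'),
#  ('Thermal insulation materials', 'mineral wool'),
#  ('Thermal insulation materials', 'fiberglass')]
```

In a semi-automatic pipeline like the one the abstract describes, such candidates would be proposed to a human expert rather than inserted into the ontology directly.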

Relevance: 30.00%

Publisher:

Abstract:

During the last few years, many research efforts have been made to improve the design of ETL (Extract-Transform-Load) systems. ETL systems are considered very time-consuming, error-prone and complex, involving several participants from different knowledge domains. ETL processes are one of the most important components of a data warehousing system, and they are strongly influenced by the complexity of business requirements and by their change and evolution. These aspects influence not only the structure of a data warehouse but also the structures of the data sources involved. To minimize the negative impact of such variables, we propose the use of ETL patterns to build specific ETL packages. In this paper, we formalize this approach using BPMN (Business Process Model and Notation) for modelling more conceptual ETL workflows, mapping them to real execution primitives through a domain-specific language that allows for the generation of specific instances executable in a commercial ETL tool.
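As a hedged sketch of the pattern idea (the paper's BPMN models and actual DSL are not reproduced here; every pattern and primitive name below is invented for illustration), a conceptual workflow can be expanded into ordered execution primitives:

```python
# Toy sketch: a conceptual ETL workflow is a list of (pattern, parameters)
# pairs, and a small mapping expands each pattern into the low-level
# primitives a concrete ETL tool would execute.

workflow = [
    ("extract",       {"source": "crm.customers"}),
    ("surrogate_key", {"table": "dim_customer", "business_key": "cust_id"}),
    ("load",          {"target": "dw.dim_customer"}),
]

# Each pattern expands to one or more execution primitives.
pattern_expansions = {
    "extract":       lambda p: [f"READ TABLE {p['source']}"],
    "surrogate_key": lambda p: [
        f"LOOKUP {p['business_key']} IN {p['table']}",
        f"GENERATE KEY WHEN MISSING IN {p['table']}",
    ],
    "load":          lambda p: [f"WRITE TABLE {p['target']}"],
}

def generate_package(workflow):
    """Expand a conceptual workflow into an ordered list of primitives."""
    steps = []
    for pattern, params in workflow:
        steps.extend(pattern_expansions[pattern](params))
    return steps

for step in generate_package(workflow):
    print(step)
```

The benefit the abstract argues for is visible even in this toy: the workflow is stated once, conceptually, and the tool-specific detail lives entirely in the expansion layer.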

Relevance: 30.00%

Publisher:

Abstract:

Among the factors that help predict academic achievement are those reflecting cognitive abilities (e.g. intelligence) and those individual differences considered non-cognitive (e.g. personality traits). In recent years, General Knowledge (GK) has also come to be considered a criterion for academic success (see Ackerman, 1997), since prior knowledge has been shown to aid the acquisition of new knowledge (Hambrick & Engle, 2001). One of the goals of educational psychology is to identify the main variables that explain academic achievement, as well as to propose theoretical models that explain the relations among these variables. The PPIK model (Intelligence-as-Process, Personality, Interests and Intelligence-as-Knowledge) proposed by Ackerman (1996) holds that the knowledge and skills acquired in a particular domain are the result of the cognitive resources a person devotes to that domain over a prolonged period of time. The model proposes that personality traits, individual/vocational interests and motivational aspects are integrated as trait complexes that determine the direction and intensity of the cognitive resources a person devotes to learning (Ackerman, 2003). In our context (Córdoba, Argentina), a group of researchers has developed a series of technical resources needed to assess some of the constructs proposed by this model. However, we do not yet have a measure of General Knowledge. This project therefore proposes the construction of an instrument to measure General Knowledge (GK), essential both for establishing parameters on the knowledge level of the university population and for testing the postulates of the PPIK theory (Ackerman, 1996) in future work.

Relevance: 30.00%

Publisher:

Abstract:

Recent developments in high-magnetic-field 13C magnetic resonance spectroscopy, with improved localization and shimming techniques, have led to important gains in the sensitivity and spectral resolution of in vivo 13C spectra in the rodent brain, enabling the separation of several 13C isotopomers of glutamate and glutamine. In this context, the assumptions used in spectral quantification might have a significant impact on the determination of 13C concentrations and the related metabolic fluxes. In this study, the time-domain spectral quantification algorithm AMARES (advanced method for accurate, robust and efficient spectral fitting) was applied to 13C magnetic resonance spectroscopy spectra acquired in the rat brain at 9.4 T, following infusion of [1,6-13C2]glucose. Using both Monte Carlo simulations and in vivo data, the goals of this work were: (1) to validate the quantification of in vivo 13C isotopomers using AMARES; (2) to assess the impact of prior knowledge on the quantification of in vivo 13C isotopomers using AMARES; and (3) to compare AMARES and LCModel (linear combination of model spectra) for the quantification of in vivo 13C spectra. AMARES led to accurate and reliable 13C spectral quantification, similar to that obtained using LCModel, when the frequency shifts, J-coupling constants and phase patterns of the different 13C isotopomers were included as prior knowledge in the analysis.
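For reference, the standard AMARES signal model (not reproduced in the abstract) fits the acquired free induction decay as a sum of exponentially damped sinusoids,

\[ \hat{x}(t_n) \;=\; \sum_{k=1}^{K} a_k \, e^{i\phi_k} \, e^{(-d_k + i 2\pi f_k)\, t_n}, \]

where \(a_k\), \(\phi_k\), \(d_k\) and \(f_k\) are the amplitude, phase, damping factor and frequency of the \(k\)-th spectral component. The prior knowledge discussed above enters as constraints on these parameters, for example fixing the frequency separation of an isotopomer's multiplet lines to its J-coupling constant and imposing a common phase pattern across the multiplet, which is what stabilizes the fit of heavily overlapping 13C isotopomer resonances.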