984 results for knowledge modeling
Abstract:
Pultrusion is an industrial process used to produce glass fibre reinforced polymer profiles. These materials are used worldwide when characteristics such as excellent electrical and magnetic insulation, a high strength-to-weight ratio, corrosion and weather resistance, long service life, and minimal maintenance are required. In this study, we present the results of the modelling and simulation of heat flow through a pultrusion die by means of Finite Element Analysis (FEA). The numerical simulation was calibrated against temperature profiles computed from thermographic measurements carried out during the pultrusion manufacturing process. The results show a maximum deviation of 7%, which is considered acceptable for this type of analysis and is below the 10% value previously specified as the maximum deviation. © 2011, Advanced Engineering Solutions.
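The numerical model in the abstract above is a full thermal FEA of the pultrusion die. As a rough, generic illustration of the kind of computation involved (not the paper's model), the sketch below solves steady-state 1D heat conduction through a wall with linear finite elements and a tridiagonal (Thomas) solve; the wall thickness, conductivity and boundary temperatures are invented for the example.

```python
def solve_1d_heat(n_elem, length, k, t_left, t_right):
    """Steady-state 1D heat conduction (no internal source) with linear
    finite elements and fixed temperatures at both ends.  Each element
    contributes (k/h) * [[1, -1], [-1, 1]] to the global matrix."""
    n = n_elem + 1
    h = length / n_elem
    main = [0.0] * n
    lower = [0.0] * (n - 1)   # sub-diagonal
    upper = [0.0] * (n - 1)   # super-diagonal
    rhs = [0.0] * n
    for e in range(n_elem):   # assemble element conductance matrices
        main[e] += k / h
        main[e + 1] += k / h
        lower[e] -= k / h
        upper[e] -= k / h
    # Impose the Dirichlet boundary temperatures by replacing the end rows.
    main[0], upper[0], rhs[0] = 1.0, 0.0, t_left
    main[-1], lower[-1], rhs[-1] = 1.0, 0.0, t_right
    # Thomas algorithm for the tridiagonal system.
    cp = [0.0] * (n - 1)
    dp = [0.0] * n
    cp[0] = upper[0] / main[0]
    dp[0] = rhs[0] / main[0]
    for i in range(1, n):
        m = main[i] - lower[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = upper[i] / m
        dp[i] = (rhs[i] - lower[i - 1] * dp[i - 1]) / m
    temps = [0.0] * n
    temps[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        temps[i] = dp[i] - cp[i] * temps[i + 1]
    return temps

# Hypothetical die wall 0.1 m thick, k = 50 W/(m.K),
# heater side held at 180 C, inner surface at 140 C.
profile = solve_1d_heat(4, 0.1, 50.0, 180.0, 140.0)
print(profile)  # linear through-thickness profile from 180 down to 140
```

With no internal heat source the exact solution is linear, so the five nodal temperatures step evenly from 180 to 140; a real die model would add heat sources, 3D geometry, and convection boundary conditions.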
Abstract:
This paper appears in the International Journal of Projectics, Vol. 4(1), pp. 39-49.
Abstract:
Paper presented at the ECKM 2010 – 11th European Conference on Knowledge Management, 2-3 September, 2010, Famalicão, Portugal. URL: http://www.academic-conferences.org/eckm/eckm2010/eckm10-home.htm
Abstract:
This chapter appears in Innovations of Knowledge Management edited by Montano, D. Copyright 2004, IGI Global, www.igi-global.com. Posted by permission of the publisher.
Abstract:
This paper addresses the topic of knowledge management in multinational companies (MNCs). Its purpose is to examine the role of expatriates in knowledge acquisition and transfer within MNCs. Specifically, it focuses on knowledge acquisition and transfer from one MNC head office located in Germany to two Portuguese subsidiaries as a basis for competitive advantage in those subsidiaries. A qualitative research methodology is used, specifically an exploratory case study approach, which examines how international assignments shape the role of expatriates in knowledge acquisition and transfer between foreign head offices and their Portuguese subsidiaries. The data were collected through semi-structured interviews with 10 Portuguese repatriates from two Portuguese subsidiaries of one foreign MNC. The findings suggest that the reasons for expatriating employees from Portuguese subsidiaries to foreign head offices are connected to (1) knowledge management strategies to develop the subsidiary's performance; (2) the acquisition of new skills and knowledge by future team leaders and business/product managers in Portuguese subsidiaries; (3) procuring knowledge from agents in the head office, to be disseminated amongst co-workers in Portuguese subsidiaries; (4) acquiring global management skills that are impossible to acquire locally; and (5) developing global projects within the MNC. Our results also show that knowledge acquisition and transfer from the foreign head office, through the subsidiaries' expatriates, contributes directly to the Portuguese subsidiaries' innovation, improved performance, competitive advantage and growth in the economic sectors in which they operate.
Moreover, the evidence reveals that expatriation is seen as a strategy to fulfil some of the organisation's main objectives through its expatriates (e.g., creating new products and business markets, developing and incorporating new organisational techniques and processes, and integrating global teams within the multinational corporation with responsibility for defining global objectives). The results obtained suggest that expatriates have a central role in acquiring and transferring strategic knowledge from the MNC head office to its subsidiaries located in Portugal. Based on the findings, the paper discusses the main theoretical and managerial implications in detail. Suggestions for further research are also presented. The study's main limitation is the small size of the sample, but its findings and methodology are original and significant.
Abstract:
To meet the increasing demands of complex inter-organisational processes and the demand for continuous innovation and internationalisation, it is evident that new forms of organisation are being adopted, fostering more intensive collaboration processes and sharing of resources, in what can be called collaborative networks (Camarinha-Matos, 2006:03). Information and knowledge are crucial resources in collaborative networks, and their management comprises fundamental processes to optimise. Knowledge organisation and collaboration systems are thus important instruments for the success of collaborative networks of organisations, and have been researched over the last decade in the areas of computer science, information science, management sciences, terminology and linguistics. Nevertheless, research in this area has not given much attention to multilingual contexts of collaboration, which pose specific and challenging problems. It is clear that access to and representation of knowledge will happen more and more in multilingual settings, which implies overcoming the difficulties inherent in the presence of multiple languages through processes such as the localisation of ontologies. Although localisation, like other processes that involve multilingualism, is a rather well-developed practice, and its methodologies and tools are fruitfully employed by the language industry in the development and adaptation of multilingual content, it has not yet been sufficiently explored as an element of support to the development of knowledge representations, in particular ontologies, expressed in more than one language. Multilingual knowledge representation is therefore an open research area calling for cross-contributions from knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences.
This workshop brought together researchers interested in multilingual knowledge representation in a multidisciplinary environment, to debate the possibilities of cross-fertilisation between knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences, applied to contexts where multilingualism continuously creates new and demanding challenges to current knowledge representation methods and techniques. Six papers dealing with different approaches to multilingual knowledge representation are presented in this workshop, most of them describing tools, approaches and results obtained in ongoing projects. In the first paper, Andrés Domínguez Burgos, Koen Kerremans and Rita Temmerman present a software module that is part of a workbench for terminological and ontological mining: Termontospider, a wiki crawler that aims to traverse Wikipedia optimally in search of domain-specific texts for extracting terminological and ontological information. The crawler is part of a tool suite for automatically developing multilingual termontological databases, i.e. ontologically underpinned multilingual terminological databases. In this paper the authors describe the basic principles behind the crawler and summarise the research setting in which the tool is currently being tested. In the second paper, Fumiko Kano presents work comparing four feature-based similarity measures derived from the cognitive sciences. The purpose of the comparative analysis is to identify the potentially most effective model for mapping independent ontologies in a culturally influenced domain. For that, datasets based on standardised pre-defined feature dimensions and values, obtainable from the UNESCO Institute for Statistics (UIS), have been used for the comparative analysis of the similarity measures.
The purpose of the comparison is to verify the similarity measures against objectively developed datasets. According to the author, the results demonstrate that the Bayesian Model of Generalization provides the most effective cognitive model for identifying the most similar corresponding concepts for a targeted socio-cultural community. In another presentation, Thierry Declerck, Hans-Ulrich Krieger and Dagmar Gromann present ongoing work and propose an approach to the automatic extraction of information from multilingual financial Web resources, providing candidate terms for building ontology elements or instances of ontology concepts. The authors present an approach complementary to the direct localisation/translation of ontology labels: acquiring terminologies by accessing and harvesting the multilingual Web presences of structured information providers in the field of finance. This leads to the detection of candidate terms in various multilingual sources in the financial domain that can be used not only as labels of ontology classes and properties but also for the possible generation of (multilingual) domain ontologies themselves. In the next paper, Manuel Silva, António Lucas Soares and Rute Costa claim that, despite the availability of tools, resources and techniques aimed at the construction of ontological artifacts, developing a shared conceptualisation of a given reality still raises questions about the principles and methods that support the initial phases of conceptualisation. These questions become, according to the authors, more complex when the conceptualisation occurs in a multilingual setting.
To tackle these issues the authors present a collaborative platform, conceptME, where terminological and knowledge representation processes support domain experts throughout a conceptualisation framework, allowing the inclusion of multilingual data as a way to promote knowledge sharing, enhance conceptualisation and support a multilingual ontology specification. In another presentation, Frieda Steurs and Hendrik J. Kockaert present TermWise, a large project dealing with legal terminology and phraseology for the Belgian public services, i.e. the translation office of the Ministry of Justice. The project aims to develop an advanced tool that includes expert knowledge in the algorithms that extract specialised language from textual data (legal documents); its outcome is a knowledge database of Dutch/French equivalents for legal concepts, enriched with the phraseology related to the terms under discussion. Finally, Deborah Grbac, Luca Losito, Andrea Sada and Paolo Sirito report on the preliminary results of a pilot project currently ongoing at the UCSC Central Library, where they propose to adapt, for subject librarians employed in large multilingual academic institutions, the model used by translators working within European Union institutions. The authors use User Experience (UX) analysis to provide subject librarians with visual support, by means of "ontology tables" depicting the conceptual links and connections of words with concepts, presented according to their semantic and linguistic meaning. The organisers hope that the selection of papers presented here will be of interest to a broad audience and will be a starting point for further discussion and cooperation.
Abstract:
This paper proposes the concept of the multi-asynchronous-channel for Petri nets. Petri nets extended with multi-asynchronous-channels and time-domains support the specification of distributed controllers, where each controller has a synchronous execution but the global system is asynchronous (globally-asynchronous locally-synchronous systems). Each multi-asynchronous-channel specifies the interaction between two or more distributed controllers. These channels, together with the time-domain concept, ensure the creation of network-independent models that support implementations using heterogeneous communication networks. The created models support not only system documentation but also validation and implementation through simulation tools, verification tools, and automatic code generators. An application example illustrates the use of a Petri net class extended with the proposed channels. © 2015 IEEE.
Abstract:
The purpose of this paper is to present a framework that increases knowledge sharing and collaboration in Higher Education Institutions. The paper discusses the concept of knowledge management in higher education institutions, presenting a systematization of knowledge practices and tools for linking people (students, teachers, researchers, secretariat staff, external entities) and promoting knowledge sharing across several key processes and services in a higher education institution, such as: research processes, learning processes, student and alumni services, administrative services and processes, and strategic planning and management. The framework proposed in this paper aims to improve the knowledge practices and processes that facilitate an environment and a culture of knowledge collaboration, sharing and discovery that should characterize an institution of higher education.
Abstract:
This paper presents the results of an exploratory study on knowledge management in Portuguese organizations. The study was based on a survey sent to one hundred of the main Portuguese organizations, in order to assess their current practices regarding knowledge management system (KMS) usage and intellectual capital (IC) measurement. With this study, we attempted to identify the main tools used to support KM processes and activities in these organizations, and the metrics organizations use to measure their knowledge assets.
Abstract:
The Evidence Accumulation Clustering (EAC) paradigm is a clustering ensemble method which derives a consensus partition from a collection of base clusterings obtained using different algorithms. It collects from the partitions in the ensemble a set of pairwise observations about the co-occurrence of objects in the same cluster, and uses these co-occurrence statistics to derive a similarity matrix, referred to as the co-association matrix. The Probabilistic Evidence Accumulation for Clustering Ensembles (PEACE) algorithm is a principled approach for extracting a consensus clustering from the observations encoded in the co-association matrix, based on a probabilistic model of the co-association matrix parameterized by the unknown assignments of objects to clusters. In this paper we extend the PEACE algorithm by deriving a consensus solution according to a MAP approach with Dirichlet priors defined over the unknown probabilistic cluster assignments. In particular, we study the positive regularization effect of Dirichlet priors on the final consensus solution with both synthetic and real benchmark data.
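As a minimal illustration of the co-association construction described in the abstract above (not the PEACE algorithm itself; the toy ensemble of base clusterings is invented for the example):

```python
def co_association(partitions):
    """Build the EAC co-association matrix: entry (i, j) is the
    fraction of base partitions that place objects i and j in the
    same cluster."""
    n = len(partitions[0])
    n_part = len(partitions)
    matrix = [[0.0] * n for _ in range(n)]
    for labels in partitions:
        for i in range(n):
            for j in range(n):
                if labels[i] == labels[j]:
                    matrix[i][j] += 1.0 / n_part
    return matrix

# Toy ensemble: three base clusterings of five objects, each given
# as a list of cluster labels (label values need not match across
# partitions; only co-membership within each partition matters).
ensemble = [
    [0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1],
    [1, 1, 2, 2, 0],
]
C = co_association(ensemble)
print(C[0][1])  # objects 0 and 1 share a cluster in all 3 partitions
print(C[2][4])  # objects 2 and 4 share a cluster in only 1 of 3
```

The resulting symmetric matrix is the similarity input from which a consensus partition is then extracted, e.g. by a probabilistic model as in PEACE or by hierarchical clustering in the original EAC formulation.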
Abstract:
Over the last fifty years mobility practices have changed dramatically, improving the way travel takes place and the time it takes, but also affecting matters like road safety and prevention. Mortality caused by road accidents has reached untenable levels. Yet research into road mortality has remained limited to comparative statistical exercises that go no further than defining accident types. In terms of sharing information and mapping accidents, little progress has been made beyond the normal publication of figures, either through simplistic tables or web pages. Despite considerable technological advances in geographical information technologies, research and development have remained rather static, with only a few good examples of dynamic mapping. The adoption of Global Positioning System (GPS) devices as standard equipment in the automobile industry has resulted in more dynamic mobility patterns, but also in higher degrees of uncertainty in road traffic. This paper describes a road accident georeferencing project for the Lisbon District covering fatalities and serious injuries during 2007. In the initial phase, individual information summaries were compiled giving information on the accidents and their major characteristics, collected by the security forces: the Public Safety Police Force (Polícia de Segurança Pública - PSP) and the National Guard (Guarda Nacional Republicana - GNR). The Google Earth platform was used to georeference the information in order to inform the public and the authorities of the accident locations, the nature of each location, and the causes and consequences of the accidents. The paper also offers insights into augmented reality technologies, considered crucial to advances in road safety and prevention studies. In the end, this exercise can be considered a success on several counts, both for the stakeholders who decide what to do and for raising public awareness of the problem of road mortality.
Abstract:
The increasing importance of integrating distributed generation and demand response in power systems operation and planning, namely at the lower voltage levels of distribution networks and in the competitive environment of electricity markets, leads us to the concept of smart grids. In both traditional and smart grid operation, non-technical losses are a great economic concern that must be addressed. In this context, the ELECON project addresses the use of demand response contributions to the identification of non-technical losses. The present paper proposes a methodology to be used by Virtual Power Players (VPPs), which are entities able to aggregate distributed small-size resources, aiming to define the best electricity tariffs for several clusters of consumers. A case study based on real consumption data demonstrates the application of the proposed methodology.
Abstract:
Dissertation presented to the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the degree of Doctor in Civil Engineering.
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
This paper presents the Realistic Scenarios Generator (RealScen), a tool that processes data from real electricity markets to generate realistic scenarios that enable the modeling of electricity market players' characteristics and strategic behavior. The proposed tool provides significant advantages to the decision-making process in an electricity market environment, especially when coupled with a multi-agent electricity markets simulator. The generation of realistic scenarios is performed using mechanisms for intelligent data analysis, which are based on artificial intelligence and data mining algorithms. These techniques allow the study of realistic scenarios, adapted to the existing markets, and improve the representation of market entities as software agents, enabling a detailed modeling of their profiles and strategies. This work contributes significantly to the understanding of the interactions between the entities acting in electricity markets by increasing the capability and realism of market simulations.