32 results for knowledge management infrastructure

at Universidad Politécnica de Madrid


Relevance:

100.00%

Publisher:

Abstract:

The recognition of the relevance of energy, especially of renewable energy generated from the sun, water, wind, tides, modern biomass or thermal sources, is growing significantly in global society because of its potential to improve quality of life and to support poverty reduction and sustainable development. Renewable energy, and mainly the energy generated by the large hydropower projects that supply most of the renewable energy consumed by developing countries, requires many complex technical, legal, financial and social processes sustained by innovation and valuable knowledge. In addition to these efforts, renewable energy requires a solid infrastructure to generate and distribute the energy resources needed to meet the basic needs of society. This demands proper construction performance so that the planned energy projects are delivered according to specifications while respecting environmental and social concerns, which implies the observance of sustainable construction guidelines. But construction projects are complex and demanding and frequently face time and cost overruns that may negatively affect the initial planning and thus society. Renewable energy, and large power generation and distribution projects in particular, are especially significant for developing countries and for Latin America, as this region concentrates substantial hydropower potential and installed capacity. Using as references the performance of Venezuelan large hydropower generation projects and the construction of the Guri dam, this research evaluates the close relationship between sustainable construction and knowledge management and their impact on achieving sustainability goals. Knowledge management processes are proposed as a basic strategy for learning from the successes and failures of previous projects and transforming improvement opportunities into actions that improve the performance of renewable energy power generation and distribution projects.

Relevance:

100.00%

Publisher:

Abstract:

Knowledge management is critical for the success of virtual communities, especially in the case of distributed working groups. A representative example of this scenario is distributed software development, where optimal coordination is necessary to avoid common problems such as duplicated work. In this paper, the feasibility of using workflow technology as a knowledge management system is discussed, and a practical use case is presented. This use case is an information system that has been deployed within a banking environment. It combines common workflow technology with a new conception of the interaction among participants, achieved by extending existing workflow definition languages.
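
As a rough illustration of that idea (not the system described in the paper), the following Python sketch treats workflow tasks as carriers of knowledge: each task definition is extended with tags and notes whose contents are indexed in a shared repository, so that later tasks in a distributed team can reuse earlier findings instead of duplicating work. All class and field names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Task:
    name: str
    assignee: str
    knowledge_tags: List[str] = field(default_factory=list)  # hypothetical extension point
    notes: List[str] = field(default_factory=list)            # knowledge produced while working

class WorkflowKnowledgeBase:
    """Indexes the knowledge produced while executing workflow tasks."""

    def __init__(self) -> None:
        self._index: Dict[str, List[str]] = {}

    def record(self, task: Task) -> None:
        # Capture the notes of a completed task under each of its tags.
        for tag in task.knowledge_tags:
            self._index.setdefault(tag, []).extend(task.notes)

    def lookup(self, tag: str) -> List[str]:
        # Later tasks (possibly handled by another team) can reuse earlier
        # findings, reducing duplicated work in distributed development.
        return self._index.get(tag, [])

if __name__ == "__main__":
    kb = WorkflowKnowledgeBase()
    review = Task("code-review", "alice", ["payments-module"],
                  ["Rounding bug fixed in currency conversion."])
    kb.record(review)
    print(kb.lookup("payments-module"))
```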

Relevance:

100.00%

Publisher:

Abstract:

In parallel to the effort of creating Linked Open Data for the World Wide Web, there are a number of projects aimed at developing the same technologies for use in closed environments such as private enterprises. In this paper, we present the results of research on interlinking structured data for use in Idea Management Systems, a still rare breed of knowledge management systems dedicated to innovation management. In our study, we show the process of extending an ontology that initially covers only the Idea Management System structure so that it can be linked with distributed enterprise data and public data using Semantic Web technologies. Furthermore, we point out how the established links can help to solve the key problems of contemporary Idea Management Systems.
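
As a hedged sketch of what such interlinking can look like in practice (written with rdflib; the namespaces and property names below are invented for illustration and are not the ontology described in the paper), an idea record can be connected both to internal enterprise data and to public Linked Data:

```python
from rdflib import Graph, Namespace, URIRef, Literal
from rdflib.namespace import RDF, DCTERMS

# Illustrative namespaces; the prefixes and property names are assumptions.
IDEA = Namespace("http://example.org/idea-management#")
CRM = Namespace("http://example.org/enterprise-crm#")

g = Graph()
idea = URIRef("http://example.org/ideas/42")

g.add((idea, RDF.type, IDEA.Idea))
g.add((idea, DCTERMS.title, Literal("Add contactless payments to kiosks")))

# Link the idea to structured enterprise data (a product record in a CRM)...
g.add((idea, IDEA.relatedProduct, CRM.kiosk_series_3))

# ...and to public Linked Data (a DBpedia concept), so the idea can later be
# assessed in a wider, organization- or market-level context.
g.add((idea, DCTERMS.subject,
       URIRef("http://dbpedia.org/resource/Contactless_payment")))

print(g.serialize(format="turtle"))
```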

Relevance:

100.00%

Publisher:

Abstract:

Today, it is increasingly important to develop competences in university students' learning process (that is to say, to acquire not only knowledge but also skills, abilities, attitudes and values). This is because professional practice requires future graduates to design and market products, defend the interests of their clients, and enter public administration or even politics. Universities must train professionals who become social and opinion leaders, consultants, advisors, entrepreneurs and, in short, people with the capacity to solve problems. This paper offers a tool to evaluate the professor's application of different management styles in the student's learning process. Its main contribution consists in advancing toward putting into practice a model that overcomes the limitations of traditional practices based on the lecture format, a model that has been applied in Portugal and Spain.

Relevance:

100.00%

Publisher:

Abstract:

The aim of the paper is to discuss the use of knowledge models to formulate general applications. First, the paper presents the recent evolution of the software field, where increasing attention is paid to conceptual modeling. Then, the current state of knowledge modeling techniques is described, where increased reliability is achieved through modern knowledge acquisition techniques and supporting tools. The KSM (Knowledge Structure Manager) tool is described next. First, the concept of a knowledge area is introduced as a building block that groups the methods for performing a collection of tasks together with the bodies of knowledge providing the basic methods for performing the basic tasks. Then, the CONCEL language for defining domain vocabularies and the LINK language for formulating methods are introduced. Finally, the object-oriented implementation of a knowledge area is described, and a general methodology for application design and maintenance supported by KSM is proposed. To illustrate the concepts and methods, an example of a system for intelligent traffic management in a road network is described. This example is followed by a proposal for generalizing the resulting architecture for reuse. Finally, some concluding comments are offered on the feasibility of using knowledge modeling tools and methods for general application design.
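
To make the knowledge-area notion more concrete, here is a minimal, hypothetical Python sketch (KSM's actual CONCEL and LINK formulations are not reproduced; the class, the method names and the toy traffic task are assumptions) showing a unit that bundles a domain vocabulary with the methods that perform its tasks:

```python
from typing import Any, Callable, Dict

class KnowledgeArea:
    """Illustrative object-oriented rendering of the 'knowledge area' idea."""

    def __init__(self, name: str, vocabulary: Dict[str, str]):
        self.name = name
        self.vocabulary = vocabulary          # CONCEL-like domain concepts (assumed)
        self.methods: Dict[str, Callable[..., Any]] = {}

    def register_method(self, task: str, method: Callable[..., Any]) -> None:
        self.methods[task] = method           # LINK-like task/method binding (assumed)

    def perform(self, task: str, **inputs: Any) -> Any:
        return self.methods[task](**inputs)

# Toy usage echoing the traffic-management example mentioned in the abstract.
traffic = KnowledgeArea("traffic_state", {"occupancy": "fraction of road occupied"})
traffic.register_method(
    "diagnose_congestion",
    lambda occupancy: "congested" if occupancy > 0.8 else "free-flowing",
)
print(traffic.perform("diagnose_congestion", occupancy=0.92))
```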

Relevance:

90.00%

Publisher:

Abstract:

Idea Management Systems are web applications that implement the notion of open innovation through crowdsourcing. Typically, organizations use this kind of system to connect with large communities in order to gather ideas for improving products or services. Originating from simple suggestion boxes, Idea Management Systems have advanced beyond collecting ideas and aspire to be knowledge management solutions capable of selecting the best ideas via collaborative as well as expert assessment methods. In practice, however, contemporary systems still face a number of problems, usually related to information overload and to recognizing submissions of questionable quality within a reasonable allocation of time and effort. This thesis focuses on the idea assessment problem area and contributes a number of solutions for filtering, comparing and evaluating ideas submitted to an Idea Management System. With respect to Idea Management System interoperability, the thesis proposes a theoretical model of the Idea Life Cycle and formalizes it as the Gi2MO ontology, which makes it possible to go beyond the boundaries of a single system and to compare and assess innovation in an organization-wide or market-wide context. Furthermore, based on the ontology, the thesis builds a number of solutions for improving idea assessment via community opinion analysis (MARL), annotation of idea characteristics (Gi2MO Types) and the study of idea relationships (Gi2MO Links). The main achievements of the thesis are: the application of theoretical innovation models to the practice of Idea Management in order to recognize the differentiation between communities; opinion metrics and their recognition as a new tool for idea assessment; and the discovery of new relationship types between ideas and their impact on idea clustering. Finally, an outcome of the thesis is the establishment of the Gi2MO Project, which serves as an incubator for Idea Management solutions and for mature open-source alternatives to the widely available commercial suites. From the academic point of view, the project delivers resources for undertaking experiments in the Idea Management Systems area and has become a forum that has gathered a number of academic and industrial partners.
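
As an illustrative sketch of how such an ontology-based export might look (written with rdflib; the namespace URIs and property names here are assumptions and may not match the published Gi2MO and MARL vocabularies), an idea and a community opinion about it can be published as RDF so that ideas from different systems can be compared on the same assessment metrics:

```python
from rdflib import Graph, Namespace, URIRef, Literal
from rdflib.namespace import RDF, DCTERMS

# Assumed namespace URIs and terms, for demonstration only.
GI2MO = Namespace("http://purl.org/gi2mo/ns#")
MARL = Namespace("http://purl.org/marl/ns#")

g = Graph()
idea = URIRef("http://example.org/ims/ideas/101")
opinion = URIRef("http://example.org/ims/opinions/7")

# Describe an idea exported from one Idea Management System...
g.add((idea, RDF.type, GI2MO.Idea))
g.add((idea, DCTERMS.title, Literal("Self-service returns portal")))

# ...and attach a community opinion with a sentiment score.
g.add((opinion, RDF.type, MARL.Opinion))
g.add((opinion, MARL.describesObject, idea))
g.add((opinion, MARL.polarityValue, Literal(0.8)))

print(g.serialize(format="turtle"))
```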

Relevance:

90.00%

Publisher:

Abstract:

This paper describes a particular knowledge acquisition tool for the construction and maintenance of the knowledge model of an intelligent system for emergency management in the field of hydrology. This tool has been developed following an innovative approach aimed at end users who are not familiar with computer-oriented terminology. According to this approach, the tool is conceived as a document processor specialized in a particular domain (hydrology), in such a way that the whole knowledge model is viewed by the user as an electronic document. The paper first describes the characteristics of the knowledge model of the intelligent system and summarizes the problems that we found during the development and maintenance of this type of model. Then, the paper describes the KATS tool, a software application that we designed to help with this task and that is intended for users who are not experts in computer programming. Finally, the paper presents a comparison between KATS and other approaches to knowledge acquisition.

Relevance:

80.00%

Publisher:

Abstract:

In the beginning of the 1990s, ontology development was similar to an art: ontology developers did not have clear guidelines on how to build ontologies, only some design criteria to follow. Work on principles, methods and methodologies, together with supporting technologies and languages, turned ontology development into an engineering discipline, the so-called Ontology Engineering. Ontology Engineering refers to the set of activities that concern the ontology development process and the ontology life cycle, the methods and methodologies for building ontologies, and the tool suites and languages that support them. Thanks to the work done in the Ontology Engineering field, the development of ontologies within and between teams has increased and improved, as has the possibility of reusing ontologies in other developments and in final applications. Currently, ontologies are widely used in (a) Knowledge Engineering, Artificial Intelligence and Computer Science; (b) applications related to knowledge management, natural language processing, e-commerce, intelligent information integration, information retrieval, database design and integration, bio-informatics and education; and (c) the Semantic Web, the Semantic Grid, and the Linked Data initiative. In this paper, we provide an overview of Ontology Engineering, covering the most prominent and widely used methodologies, languages, and tools for building ontologies. In addition, we include some remarks on how all these elements can be used in the Linked Data initiative.

Relevance:

80.00%

Publisher:

Abstract:

Acquired Brain Injury (ABI), whether of vascular or traumatic origin, is one of the most important causes of neurological disability. People who suffer ABI see their quality of life decrease owing to the impairment of one or more cognitive functions (memory, attention, language or executive functions). Traditional cognitive rehabilitation protocols are very expensive, so any help provided in this area is justified. PREVIRNEC is a new platform for cognitive tele-rehabilitation that allows the neuropsychologist to schedule rehabilitation sessions consisting of specifically designed tasks, while offering an additional channel of communication between neuropsychologists and patients. In addition, the platform offers a knowledge management module that allows the cognitive rehabilitation of this kind of patient to be optimized.

Relevance:

80.00%

Publisher:

Abstract:

Ontology quality can be affected by the difficulties involved in ontology modelling, which may lead to the appearance of anomalies in ontologies. This situation creates the need for validating ontologies, that is, assessing their quality and correctness. Ontology validation is a key activity in different ontology engineering scenarios, such as development and selection. This paper contributes to the ontology validation activity by proposing a web-based tool, called OOPS!, independent of any ontology development environment, for detecting anomalies in ontologies. This tool will help developers to improve ontology quality by automatically detecting potential errors.
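
To give a flavour of what an automated anomaly check involves (this is a generic, illustrative example written with rdflib, not the pitfall catalogue or the implementation behind OOPS!), the sketch below flags OWL classes that lack a human-readable rdfs:label:

```python
from rdflib import Graph
from rdflib.namespace import OWL, RDF, RDFS

def classes_without_labels(g: Graph):
    """Return OWL classes in the graph that have no rdfs:label annotation."""
    return [cls for cls in g.subjects(RDF.type, OWL.Class)
            if g.value(cls, RDFS.label) is None]

if __name__ == "__main__":
    g = Graph()
    # Placeholder file name; an RDF/XML serialization is assumed here.
    g.parse("ontology.owl", format="xml")
    for cls in classes_without_labels(g):
        print(f"Potential pitfall: {cls} has no rdfs:label")
```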

Relevance:

80.00%

Publisher:

Abstract:

This Ph.D. dissertation proposes a pan-European and transnational telematic voting scheme that is capable of meeting the strictest security requirements. This transnational approach is a significant innovation that entails identifying citizens beyond the borders of their own country, which in turn requires that all European citizens have a digital identity that is recognized beyond the borders of their country of origin. Based on these premises, the proposal in this thesis is addressed from two complementary angles: first, the design of a voting scheme capable of winning the confidence of European governments and citizens and, second, the search for a solution to the problem of interoperability of Identity Management Systems (IDMs), in line with the work currently being carried out by the EU to integrate the services provided by the public administrations of the different European countries. The starting point of this work is the identification of the requirements that determine the adequate functioning of a telematic voting system and, based on them, the proposal of a set of elements and criteria that allow different telematic voting systems to be compared and the suitability of the proposed system to be evaluated. The thesis then analyzes in depth the most recent and significant telematic voting experiences carried out by different countries to automate their electoral processes, showing that even the most recent systems still have significant security shortcomings. Moreover, a significant portion of the population remains wary and often questions the validity of the published results. Therefore, a system that aspires to win the trust of citizens and governments must not only operate correctly, transferring traditional voting processes into a telematic environment, but must also provide additional mechanisms that can overcome the fears aroused by the new voting system. Hence, this thesis focuses, first, on creating irrefutable, comprehensible and auditable proof throughout the voting process that can demonstrate with certainty to all actors involved in the process (the government, political parties, voters, polling station officials, observers, electoral boards, judges, etc.) that the published results are accurate and that the principles of anonymity and "one person, one vote" have not been violated. Accordingly, the solution in this thesis not only provides mechanisms to minimize the risk of vote buying, but also incorporates robust security mechanisms that make it possible not only to detect possible attempts to manipulate the system, but also to identify the responsible party. Additionally, this thesis goes one step further and moves the voting scenario to a pan-European setting, in which new problems appear. Indeed, one of the main challenges currently facing transnational voting processes is the lack of rigorous and dynamic procedures for the synchronized updating of the voter rolls of the different countries involved, free from errors that could leave the system unable to prevent a person from casting more than one vote or could deprive someone of the effective exercise of the right to vote. This recognition of transnational identity requires interoperability between the IDMs of the different European countries. To solve this problem, the thesis relies on proposals emerging within the EU that are expected to be consolidated in the coming years, both in the area of digital identity (with the launch of the European Citizen Card) and in the deployment of an identity management infrastructure that will enable the interoperability of the IDMs of the different member states. Building on these, the thesis proposes a telematic infrastructure that facilitates the interoperability of the census management systems of the European states in which voting is jointly carried out. The result is a versatile, secure, robust, reliable and auditable system that can be applied in pan-European elections and that treats dynamic updating of the voter rolls as a critical part of the voting process.

Relevance:

80.00%

Publisher:

Abstract:

Software testing is a key aspect of software reliability and quality assurance in a context where software development constantly has to overcome mammoth challenges in a continuously changing environment. One of the characteristics of software testing is that it has a large intellectual capital component and can thus benefit from the experience gained in past projects. Software testing can therefore potentially benefit from solutions provided by the knowledge management discipline. There are, in fact, a number of proposals concerning effective knowledge management related to several software engineering processes. Objective: We defend the use of a lessons learned system for software testing. The reason is that such a system is an effective knowledge management resource that enables testers and managers to take advantage of the experience locked away in the brains of the testers. To do this, the experience has to be gathered, disseminated and reused. Method: After analyzing the existing proposals for managing software testing experience, we detected significant weaknesses in current systems of this type. The architectural model proposed here for lessons learned systems is designed to avoid these weaknesses. This model (i) defines the structure of the software testing lessons learned; (ii) sets up procedures for lessons learned management; and (iii) supports the design of software tools to manage the lessons learned. Results: The outcome is a different approach, based on the management of the lessons learned that software testing engineers gather from everyday experience, with two basic goals: usefulness and applicability. Conclusion: The architectural model proposed here lays the groundwork for overcoming the obstacles to sharing and reusing the experience gained in software testing and test management. As such, it provides guidance for developing software testing lessons learned systems.
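
As a rough sketch of what a software-testing lesson learned record and its retrieval might look like (the field names and the repository design here are hypothetical, not the structure defined by the paper's architectural model):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LessonLearned:
    title: str
    project: str
    testing_activity: str          # e.g. "regression testing", "test design"
    problem: str
    recommendation: str
    keywords: List[str] = field(default_factory=list)

class LessonRepository:
    """Stores lessons so testers can reuse experience from past projects."""

    def __init__(self) -> None:
        self._lessons: List[LessonLearned] = []

    def add(self, lesson: LessonLearned) -> None:
        self._lessons.append(lesson)

    def search(self, keyword: str) -> List[LessonLearned]:
        kw = keyword.lower()
        return [l for l in self._lessons
                if any(kw in k.lower() for k in l.keywords)
                or kw in l.problem.lower()]

repo = LessonRepository()
repo.add(LessonLearned(
    title="Flaky UI tests on nightly builds",
    project="billing-portal",
    testing_activity="regression testing",
    problem="Timing-dependent selectors caused intermittent failures.",
    recommendation="Use explicit waits and stable test IDs instead of CSS paths.",
    keywords=["flaky", "ui", "selenium"],
))
print([l.title for l in repo.search("flaky")])
```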

Relevance:

80.00%

Publisher:

Abstract:

At the present time, almost all map libraries on the Internet are image collections generated by the digitization of early maps. This type of graphics file provides researchers with the possibility of accessing and visualizing historical cartographic information, bearing in mind that the quality of this information depends on factors such as the accuracy of the digitization process and proprietary constraints (e.g., visualization and resolution, downloading options, copyright, use constraints). In most cases, access to these map libraries is useful only as a first approach, and it is not possible to use the maps for scientific work because of the scarcity of tools available to measure, match, analyze and/or combine these resources with other kinds of cartography. This paper presents a method to enrich virtual map rooms and to provide historians and other professionals with a tool that lets them make the most of map libraries in the digital era.

Relevance:

80.00%

Publisher:

Abstract:

Today's knowledge management (KM) systems seldom account for language management and, especially, multilingual information processing. Document management is one of the strongest components of KM systems. If these systems do not include a multilingual knowledge management policy, intranet searches, excessive document space occupancy and redundant information slow down what would otherwise be the most effective processes in a single-language environment. In this paper, we model information flow from the sources of knowledge to the persons or systems searching for specific information. Within this framework, we focus on the importance of multilingual information processing, which is a hugely complex component of modern organizations.
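
As a toy illustration of the kind of problem a multilingual KM policy addresses (this sketch is not the information-flow model from the paper; all names are hypothetical), documents can carry a language tag and a language-independent concept key, so that the same content stored in several languages is served as one knowledge item rather than indexed as redundant files:

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class Document:
    doc_id: str
    language: str      # e.g. "en", "es"
    concept_key: str   # language-independent identifier for the content
    text: str

class MultilingualIndex:
    """Groups language variants of the same content under one concept key."""

    def __init__(self) -> None:
        self._by_concept: Dict[str, List[Document]] = defaultdict(list)

    def add(self, doc: Document) -> None:
        self._by_concept[doc.concept_key].append(doc)

    def find(self, concept_key: str, preferred_language: str) -> Optional[Document]:
        """Return the version in the preferred language, or any available one."""
        versions = self._by_concept.get(concept_key, [])
        for doc in versions:
            if doc.language == preferred_language:
                return doc
        return versions[0] if versions else None

index = MultilingualIndex()
index.add(Document("d1", "en", "travel-policy", "Travel expense policy ..."))
index.add(Document("d2", "es", "travel-policy", "Política de gastos de viaje ..."))
print(index.find("travel-policy", "es").doc_id)  # -> d2
```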

Relevance:

80.00%

Publisher:

Abstract:

Background: Cognitive skills training for minimally invasive surgery has traditionally relied upon diverse tools, such as seminars or lectures. Web technologies for e-learning have been adopted to provide ubiquitous training and to serve as structured repositories for the vast number of laparoscopic video sources available. However, these technologies fail to offer features such as formative and summative evaluation, guided learning, or collaborative interaction between users. Methodology: The "TELMA" environment is presented as a new technology-enhanced learning platform that enhances the user experience through a four-pillar architecture: (1) an authoring tool for the creation of didactic content; (2) a learning content and knowledge management system that incorporates a modular and scalable system to capture, catalogue, search and retrieve multimedia content; (3) an evaluation module that provides learning feedback to users; and (4) a professional network for collaborative learning between users. Face validation of the environment and of the authoring tool is presented. Results: Face validation of TELMA reveals the positive perception of surgeons regarding the implementation of TELMA and their willingness to use it as a cognitive skills training tool. Preliminary validation data also reflect the importance of providing an easy-to-use, functional authoring tool for creating didactic content. Conclusion: The TELMA environment is currently installed and used at the Jesús Usón Minimally Invasive Surgery Centre and several other Spanish hospitals. Face validation results confirm the acceptance and usefulness of this new minimally invasive surgery training environment.