26 results for Knowledge Process
at Universidad Politécnica de Madrid
Abstract:
This paper presents the results of an analysis focused on scientific-technological knowledge transfer (KT) in four Mexican firms, carried out using a case study approach. The analysis highlights the use of KT mechanisms as a means for firms to obtain scientific-technological knowledge, learn, build S&T capabilities, and achieve R&D and innovation results.
Abstract:
This paper introduces the experience of using videoconferencing and recording as a mechanism to support courses that need to be promoted or discontinued within the framework of the European convergence process. Our objective is to make these courses accessible as a live stream during the lessons, and to make the recorded lectures and associated documents available to the students as soon as each lesson has finished. The technology used was developed at our university and is entirely open source. Although this is a technical project, the key is the human factor involved. The people managing the virtual sessions are students of the courses being recorded. However, they lack technical knowledge, so we had to train them in audiovisuals and enhance the usability of the videoconferencing tool and platform. The validation process is being carried out in five real scenarios at our university. Throughout this period we are evaluating the technical and pedagogical issues of this experience for both students and teachers, in order to guide the future development of the service. Depending on the final results, the lecture recording service will be made available as an educational resource for all of the teaching staff of our university.
Abstract:
Semantic technologies have become widely adopted in recent years, and choosing the right technologies for the problems that users face is often a difficult task. This paper presents an application of the Analytic Network Process for the recommendation of semantic technologies, which is based on a quality model for semantic technologies. Instead of relying on expert-based comparisons of alternatives, the comparisons in our framework depend on real evaluation results. Furthermore, the recommendations in our framework derive from user quality requirements, which leads to better recommendations tailored to users’ needs. This paper also presents an algorithm for pairwise comparisons, which is based on user quality requirements and evaluation results.
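A minimal sketch of the idea described above: deriving pairwise comparisons from measured evaluation results rather than from expert judgment, then ranking alternatives by a priority vector weighted by a user quality requirement. The tool names, scores and requirement weight are illustrative assumptions, not data or code from the paper.

```python
# Illustrative sketch only: pairwise comparisons built from (assumed) measured
# evaluation results instead of subjective expert judgments.
import numpy as np

# Hypothetical evaluation results for one quality characteristic (higher is better).
scores = {"ToolA": 0.92, "ToolB": 0.61, "ToolC": 0.45}
tools = list(scores)

# Comparison matrix: entry (i, j) is the ratio of measured scores, so the
# matrix is consistent by construction and reflects the evaluation data.
M = np.array([[scores[a] / scores[b] for b in tools] for a in tools])

# Priority vector: principal eigenvector of the comparison matrix (AHP/ANP style).
eigvals, eigvecs = np.linalg.eig(M)
w = np.abs(eigvecs[:, eigvals.real.argmax()].real)
w /= w.sum()

# Weight this characteristic by the user's (assumed) quality requirement.
requirement_weight = 0.7
for tool, priority in sorted(zip(tools, requirement_weight * w), key=lambda t: -t[1]):
    print(f"{tool}: {priority:.3f}")
```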
Abstract:
This article describes the work performed on the database of questions belonging to the various opinion polls carried out during the last 50 years in Spain. Approximately half of the questions are provided with a title, while the other half remain untitled. The work and the techniques implemented to automatically generate titles for the untitled questions are described. This process is performed on very short texts, and the generated titles are subject to strong stylistic conventions and must be fully grammatical pieces of Spanish.
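As a purely illustrative baseline, and not the technique implemented in the article, a draft title for an untitled question could be obtained by stripping interrogative boilerplate and truncating the remainder; the regular expression patterns and length limit below are assumptions.

```python
# Naive baseline sketch for drafting a title from a short Spanish survey question.
# The prefix patterns and word limit are illustrative assumptions only.
import re

INTERROGATIVE_PREFIXES = re.compile(
    r"^\s*¿?\s*(me podría decir|podría decirme|en su opinión|cree usted que)\s*",
    re.IGNORECASE,
)

def draft_title(question: str, max_words: int = 8) -> str:
    text = INTERROGATIVE_PREFIXES.sub("", question).strip(" ¿?")
    words = text.split()
    return " ".join(words[:max_words]).capitalize()

print(draft_title("¿Me podría decir qué opina de la situación económica actual?"))
```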
Abstract:
There is growing concern over the challenges for innovation in the Freight Pipeline industry. Since the early works of Chesbrough a decade ago, we have learned a lot about the content, context and process of open innovation. However, much more research is needed in the Freight Pipeline industry. The reality is that few corporations have institutionalized open innovation practices in ways that have enabled substantial growth or industry leadership. Based on this, we pursue the following question: How does a firm’s integration into knowledge networks depend on its ability to manage knowledge? A competence-based model for freight pipeline organizations is analysed; this model should be understood by any organization that wants to succeed in motivating the professionals who carry out innovations and play a main role in collaborative knowledge creation processes. This paper aims to explain how open innovation can achieve its potential in most Freight Pipeline industries.
Abstract:
In parallel with the effort to create Linked Open Data for the World Wide Web, there are a number of projects aimed at developing the same technologies for use in closed environments such as private enterprises. In this paper, we present the results of research on interlinking structured data for use in Idea Management Systems, a still rare breed of knowledge management systems dedicated to innovation management. In our study, we show the process of extending an ontology that initially covers only the Idea Management System structure towards linking with distributed enterprise data and public data using Semantic Web technologies. Furthermore, we point out how the established links can help to solve the key problems of contemporary Idea Management Systems.
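A minimal sketch of the kind of interlinking described above, written with rdflib; the namespace, class and property names, and the target enterprise and DBpedia resources are illustrative assumptions, not the ontology or data from the study.

```python
# Illustrative sketch: linking an idea record from a (hypothetical) Idea
# Management System ontology to enterprise and public Linked Data resources.
from rdflib import Graph, Namespace, URIRef, Literal
from rdflib.namespace import RDF, RDFS

IMS = Namespace("http://example.org/ims#")          # assumed ontology namespace
ENTERPRISE = Namespace("http://example.org/erp/")   # assumed enterprise data namespace

g = Graph()
g.bind("ims", IMS)

idea = URIRef("http://example.org/ims/idea/42")
g.add((idea, RDF.type, IMS.Idea))
g.add((idea, RDFS.label, Literal("Reusable packaging for spare parts")))

# Link to distributed enterprise data (e.g. the product line the idea targets).
g.add((idea, IMS.relatesToProduct, ENTERPRISE["product/spare-part-7731"]))

# Link to public Linked Data (e.g. the topic of the idea in DBpedia).
g.add((idea, IMS.hasTopic, URIRef("http://dbpedia.org/resource/Reusable_packaging")))

print(g.serialize(format="turtle"))
```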
Abstract:
Today, it is increasingly important to develop competences in the learning process of university students (that is to say, to acquire not only knowledge but also skills, abilities, attitudes and values). This is because professional practice requires that future graduates design and market products, defend the interests of their clients, and enter public administration or even politics. Universities must train professionals who become social and opinion leaders, consultants, advisors, entrepreneurs and, in short, people with the capacity to solve problems. This paper offers a tool to evaluate the professor’s application of different management styles in the student’s learning process. Its main contribution consists of advancing toward putting into practice a model that overcomes the limitations of traditional practices based on the lecture-style class, and which has been applied in Portugal and Spain.
Abstract:
Any scientific publication aims to advance the field of knowledge that it deals with, and therefore the editorial staff will always be seeking the most revolutionary papers among all of those received. On the other hand, the reviewers’ task is usually a much more conservative one, as reviewers are responsible for verifying the realism of the methods proposed and the veracity of the claimed results.
Abstract:
This paper analyses how the internal resources of small- and medium-sized enterprises determine access (learning processes) to technology centres (TCs) or industrial research institutes (innovation infrastructure) in traditional low-tech clusters. These interactions basically represent traded (market-based) transactions, which constitute important sources of knowledge in clusters. The paper addresses the role of TCs in low-tech clusters, and uses semi-structured interviews with 80 firms in a manufacturing cluster. The results indicate that producer–user interactions are the most frequent; thus, the more knowledge-intensive a sector's base, the more likely the utilization of the available research infrastructure becomes. Conversely, the sectors with less knowledge-intensive structures, i.e. less absorptive capacity (AC), present weak linkages to TCs, as they frequently prefer to interact with suppliers, who act as transceivers of knowledge. Therefore, not all the firms in a cluster can fully exploit the available research infrastructure, and their AC moderates this engagement. In addition, the existence of TCs is not sufficient, since a firm's search strategies must also play an active role in undertaking interactions and maintaining openness to the available sources of knowledge. The study has implications for policymakers and academia.
Abstract:
Expert systems are built from knowledge traditionally elicited from the human expert. It is precisely knowledge elicitation from the expert that is the bottleneck in expert system construction. On the other hand, a data mining system, which automatically extracts knowledge, needs expert guidance on the successive decisions to be made in each of the system phases. In this context, expert knowledge and data mining discovered knowledge can cooperate, maximizing their individual capabilities: data mining discovered knowledge can be used as a complementary source of knowledge for the expert system, whereas expert knowledge can be used to guide the data mining process. This article summarizes different examples of systems where there is cooperation between expert knowledge and data mining discovered knowledge and reports our experience of such cooperation gathered from a medical diagnosis project called Intelligent Interpretation of Isokinetics Data, which we developed. From that experience, a series of lessons were learned throughout project development. Some of these lessons are generally applicable and others pertain exclusively to certain project types.
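A small sketch of the cooperation pattern described above: expert knowledge guiding the data mining step (which features to use, how complex the model may be) and the mined knowledge being kept alongside hand-written expert rules in one knowledge base. The features, limits, rules and synthetic data are invented for illustration and are not taken from the isokinetics project.

```python
# Illustrative sketch of expert/data-mining cooperation, not the project's code.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Expert guidance for the mining step: which features to use and how deep to go.
EXPERT_FEATURES = ["peak_torque", "work_fatigue"]   # assumed feature subset
EXPERT_MAX_DEPTH = 2                                # assumed complexity limit

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                       # synthetic data for the sketch
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Data mining step, constrained by the expert's guidance.
tree = DecisionTreeClassifier(max_depth=EXPERT_MAX_DEPTH, random_state=0).fit(X, y)
mined_rules = export_text(tree, feature_names=EXPERT_FEATURES)

# Hand-written expert rule, kept alongside the mined rules in one knowledge base.
expert_rule = "IF peak_torque is very low AND patient reports pain THEN flag for review"

knowledge_base = {"expert": [expert_rule], "mined": [mined_rules]}
print(knowledge_base["mined"][0])
```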
Abstract:
The conceptual design phase is partially supported by product lifecycle management/computer-aided design (PLM/CAD) systems, causing a discontinuity in the design information flow: customer needs → functional requirements → key characteristics → design parameters (DPs) → geometric DPs. Aiming to address this issue, a knowledge-based approach is proposed to integrate quality function deployment, failure mode and effects analysis, and axiomatic design into a commercial PLM/CAD system. A case study, the main subject of this article, was carried out to validate the proposed process; to evaluate, through a pilot development, how the commercial PLM/CAD modules and application programming interface could support the information flow; and, based on the pilot scheme results, to propose a full development framework.
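A minimal data-structure sketch of the traceability chain the article aims to keep unbroken (customer need → functional requirement → key characteristic → design parameter → geometric DP). The class and field names, including the QFD importance and FMEA risk priority number fields, are assumptions, not the PLM/CAD implementation.

```python
# Illustrative sketch of the conceptual-design traceability chain only.
from dataclasses import dataclass, field

@dataclass
class GeometricDP:
    name: str
    value: float
    unit: str

@dataclass
class DesignParameter:
    name: str
    geometric_dps: list[GeometricDP] = field(default_factory=list)

@dataclass
class KeyCharacteristic:
    name: str
    rpn: int                      # FMEA risk priority number (assumed field)
    design_parameters: list[DesignParameter] = field(default_factory=list)

@dataclass
class FunctionalRequirement:
    statement: str
    key_characteristics: list[KeyCharacteristic] = field(default_factory=list)

@dataclass
class CustomerNeed:
    statement: str
    importance: int               # QFD importance rating (assumed field)
    functional_requirements: list[FunctionalRequirement] = field(default_factory=list)

# Example chain: need -> FR -> KC -> DP -> geometric DP.
need = CustomerNeed("Easy to carry", 5, [
    FunctionalRequirement("Minimize product weight", [
        KeyCharacteristic("Housing mass", rpn=120, design_parameters=[
            DesignParameter("Wall thickness", [GeometricDP("t_wall", 2.0, "mm")]),
        ]),
    ]),
])
print(need)
```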
Abstract:
This paper presents a data-intensive architecture that demonstrates the ability to support applications from a wide range of application domains, and to support the different types of users involved in defining, designing and executing data-intensive processing tasks. The prototype architecture is introduced, and the pivotal role of DISPEL as a canonical language is explained. The architecture promotes the exploration and exploitation of distributed and heterogeneous data and spans the complete knowledge discovery process, from data preparation, to analysis, to evaluation and reiteration. The architecture evaluation included large-scale applications from astronomy, cosmology, hydrology, functional genetics, image processing and seismology.
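The sketch below is not DISPEL; it only illustrates in Python the idea of composing a knowledge discovery process from data preparation, analysis and evaluation stages that a canonical workflow language could describe. All function names, stage rules and example records are assumptions.

```python
# Conceptual sketch only; DISPEL itself is a separate workflow language.
from typing import Callable, Iterable

Stage = Callable[[Iterable], Iterable]

def prepare(records: Iterable) -> Iterable:
    """Data preparation: drop incomplete records (illustrative rule)."""
    return (r for r in records if r is not None)

def analyse(records: Iterable) -> Iterable:
    """Analysis: derive a simple feature from each record (illustrative rule)."""
    return (len(str(r)) for r in records)

def evaluate(features: Iterable) -> float:
    """Evaluation: summarise the result so it can drive reiteration."""
    values = list(features)
    return sum(values) / len(values) if values else 0.0

def run_pipeline(records: Iterable, stages: list[Stage]) -> float:
    data = records
    for stage in stages:
        data = stage(data)
    return evaluate(data)

print(run_pipeline(["seismic-trace-1", None, "seismic-trace-2"], [prepare, analyse]))
```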
Abstract:
There is no empirical evidence whatsoever to support most of the beliefs on which software construction is based. We do not yet know the adequacy, limits, qualities, costs and risks of the technologies used to develop software. Experimentation helps to check and convert beliefs and opinions into facts. This research is concerned with the replication area. Replication is a key component for gathering empirical evidence on software development that can be used in industry to build better software more efficiently. Replication has not been an easy thing to do in software engineering (SE) because the experimental paradigm applied to software development is still immature. Nowadays, a replication is executed mostly using a traditional replication package. But traditional replication packages do not appear, for some reason, to have been as effective as expected for transferring information among researchers in SE experimentation. The trouble spot appears to be the replication setup, caused by version management problems with materials, instruments, documents, etc. This has proved to be an obstacle to obtaining enough details about the experiment to be able to reproduce it as exactly as possible. We address the problem of information exchange among experimenters by developing a schema to characterize replications. We will adapt configuration management and product line ideas to support the experimentation process. This will enable researchers to make systematic decisions based on explicit knowledge rather than assumptions about replications. This research will output a replication support web environment. This environment will not only archive but also manage experimental materials flexibly enough to allow both similar and differentiated replications with massive experimental data storage. The platform should be accessible to several research groups working together on the same families of experiments.
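A minimal sketch of what a schema for characterizing replications might capture: the family of experiments, versioned materials (the version management pain point mentioned above), and recorded deviations from the baseline experiment. The fields and example values are assumptions, not the schema developed in this research.

```python
# Illustrative sketch of a replication record; not the actual schema.
from dataclasses import dataclass, field

@dataclass
class Material:
    name: str
    version: str                           # versioned, to ease the setup problem

@dataclass
class Replication:
    family: str                            # family of experiments it belongs to
    baseline_experiment: str               # experiment being replicated
    site: str                              # research group or location
    materials: list[Material] = field(default_factory=list)
    changes_from_baseline: list[str] = field(default_factory=list)

    def is_exact(self) -> bool:
        """A replication with no recorded deviations is a (near-)exact one."""
        return not self.changes_from_baseline

rep = Replication(
    family="test-driven development",
    baseline_experiment="TDD-Exp-01",
    site="UPM",
    materials=[Material("task specification", "v2.1"),
               Material("post-test questionnaire", "v1.0")],
    changes_from_baseline=["subjects are professionals instead of students"],
)
print(rep.is_exact())
```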
Abstract:
Autonomous systems are systems capable of operating in a real-world environment without any form of external control for extended periods of time. Autonomy is a desired goal for every system, as it improves its performance, safety and profit. Ontologies are a way to conceptualize the knowledge of a specific domain. In this paper, an ontology for the description of autonomous systems, as well as for their development (engineering), is presented and applied to a process. This ontology is intended to be applied and used to generate final applications following a model-driven methodology.
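A tiny illustrative fragment, not the ontology presented in the paper, showing how domain concepts of this kind could be expressed with rdflib; every class, property and instance name below is an assumption.

```python
# Illustrative ontology fragment for autonomous systems; all names are assumed.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS, OWL

AS = Namespace("http://example.org/autonomous-systems#")

g = Graph()
g.bind("as", AS)

for cls in (AS.AutonomousSystem, AS.Sensor, AS.ControlLoop, AS.Mission):
    g.add((cls, RDF.type, OWL.Class))

g.add((AS.hasSensor, RDF.type, OWL.ObjectProperty))
g.add((AS.hasSensor, RDFS.domain, AS.AutonomousSystem))
g.add((AS.hasSensor, RDFS.range, AS.Sensor))

# One instance, as a seed for model-driven generation of final applications.
g.add((AS.rover1, RDF.type, AS.AutonomousSystem))
g.add((AS.rover1, AS.hasSensor, AS.lidarFront))
g.add((AS.rover1, RDFS.label, Literal("Example rover")))

print(g.serialize(format="turtle"))
```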
Abstract:
The main objective of this article is the analysis of teaching techniques, ranging from the use of blackboard and chalk in old traditional classes, through slides and overhead projectors in the eighties and presentation software in the nineties, to video, the electronic board and network resources nowadays. Furthermore, all of the aforementioned is viewed in light of the different mentalities through which the teacher conditions the student with each new teaching technique, improving soft skills but perhaps leading either to encouragement or to disinterest, and including a lack of consolidation of educational knowledge at the scientific, technological and specific levels. In the same way, we study the process of adaptation required of teachers, the differences in the processes of information transfer and education towards the student, and even the existence of teachers who are no longer attracted by their work, which has become much simpler owing to new technologies and the greater ease in the development of classes under the criteria described in the new degree programs adopted by the European Higher Education Area. Moreover, it is also intended to understand the evolution of students’ profiles, from the eighties to the present, in order to understand certain attitudes, behaviours, accomplishments and acknowledgements acquired over the semesters within the degree programs. As an Educational Innovation Group, another key question also arises: what will the learning techniques of the future be? How will these evolving matters affect, both positively and negatively, the mentality, attitude, behaviour, learning, achievement of goals and satisfaction levels of all those involved in university education? Clearly, this evolution from chalk to the electronic board, together with the three-dimensional view of our work and its sequence, greatly facilitates understanding and later adaptation to the business world, but it does not answer the unknowns regarding knowledge and the full development of achievement indicators in the basic skills of a degree. This is the underlying question that steers the presented research.