809 results for Computer science in education
Abstract:
Fragmentation on dynamically reconfigurable FPGAs is a major obstacle to the efficient management of logic space in reconfigurable systems. When resource-allocation decisions have to be made at run time, a rearrangement may be necessary to release enough contiguous resources to implement incoming functions. The feasibility of run-time relocation depends on the processing time required to set up rearrangements. Moreover, the performance of the relocated functions should not be affected by this process; otherwise the performance of the whole system, and even its operation, may be at risk. Relocation should take into account not only specific functional issues but also the FPGA architecture, since these two aspects are normally intertwined. This paper proposes a simple and fast method, based on prior function labelling and on the application of the Euclidean distance concept, to assess the performance degradation of a function during relocation and to speed up the defragmentation process.
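The abstract does not spell out how the distance metric is applied; the sketch below is only an illustration of the Euclidean-distance idea for ranking candidate placements, assuming each function is labelled with the grid coordinates of the region it occupies (the coordinates, the cost proxy and the selection rule are assumptions for illustration, not the paper's method).

    import math

    def relocation_distance(current, target):
        # Euclidean distance between a function's current and candidate
        # placement on the FPGA grid: a proxy for relocation cost.
        (x1, y1), (x2, y2) = current, target
        return math.hypot(x2 - x1, y2 - y1)

    def pick_relocation(current, free_regions):
        # During defragmentation, prefer the free region closest to the
        # function's current placement to keep the rearrangement cheap.
        return min(free_regions, key=lambda r: relocation_distance(current, r))

    print(pick_relocation((2, 3), [(10, 1), (3, 5), (7, 7)]))  # -> (3, 5)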
Abstract:
The developments of digital technology have opened new outlooks for online education, offering students the flexibility to learn at any time and in any place. With all these instructional changes, instructors at all levels of the educational chain have been compelled to adapt quickly to this reality. They have a wide diversity of tools available to grab students' attention and motivate them to embrace knowledge in their own learning process. One of these resources is the use of videos. Through them, lecturers can deliver complex information and content to students and, if used creatively, videos can become a powerful technological tool in education. In this article we explore some of the potential benefits and challenges associated with the use of videos in teaching and learning at higher education levels. We also discuss some thoughts and examples on the use of teaching materials to enhance students' learning, and share ideas about the potential and future of new video-annotation software resources as emerging open tools for group work.
Abstract:
The results presented in this article are part of a wider PhD project developed under the Doctoral Program in Multimedia in Education at the University of Aveiro. The project, which sought to understand student identity in Higher Education through the use of Digital Storytelling, was made possible by a Doctoral Grant awarded by Fundação para a Ciência e Tecnologia (FCT).
Abstract:
Decentralised co-operative multi-agent systems are computational systems where conflicts are frequent due to the nature of the represented knowledge. Negotiation methodologies, in this case argumentation-based negotiation methodologies, were developed and applied to solve unforeseeable and therefore unavoidable conflicts. The supporting computational model is a distributed belief revision system in which argumentation plays the decisive role of revision. The distributed belief revision system detects, isolates and solves, whenever possible, the identified conflicts. Detection and isolation of conflicts is performed automatically by the distributed consistency mechanism, and the resolution of the conflict, or belief revision, is achieved via argumentation. We propose and describe two argumentation protocols intended to solve different types of identified information conflicts: context-dependent and context-independent conflicts. While the protocol for context-dependent conflicts generates new consensual alternatives, the protocol for context-independent conflicts adopts the soundest, strongest argument presented. The paper shows the suitability of using argumentation as a distributed, decentralised belief revision protocol to solve unavoidable conflicts.
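The abstract does not define how argument strength is measured; the sketch below only illustrates the context-independent rule of adopting the soundest, strongest argument presented, with a numeric score standing in for whatever ordering the actual protocol uses (the Argument structure and the score are assumptions for illustration).

    from dataclasses import dataclass

    @dataclass
    class Argument:
        agent: str        # agent putting the argument forward
        claim: str        # belief the argument supports
        strength: float   # assumed numeric stand-in for "soundest, strongest"

    def resolve_context_independent(arguments):
        # Context-independent rule from the abstract: adopt the
        # strongest argument presented, regardless of context.
        return max(arguments, key=lambda a: a.strength)

    winner = resolve_context_independent([
        Argument("a1", "sensor faulty", 0.4),
        Argument("a2", "sensor ok", 0.9),
        Argument("a3", "sensor ok", 0.7),
    ])
    print(winner.claim)  # -> sensor ok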
Abstract:
Multi-agent architectures are well suited for complex, inherently distributed problem-solving domains. Among the many challenging aspects that arise within this framework, a crucial one emerges: how to incorporate dynamic and conflicting agent beliefs? While belief revision in a single-agent scenario concentrates on incorporating new information while preserving consistency, in a multi-agent system it also has to deal with possible conflicts between the agents' perspectives. To provide an adequate framework, each agent, built as a combination of an assumption-based belief revision system and a cooperation layer, was enriched with additional features: a distributed search control mechanism allowing dynamic context management, and a set of different distributed consistency methodologies. As a result, a Distributed Belief Revision Testbed (DiBeRT) was developed. This paper is a preliminary report presenting some of DiBeRT's contributions: a concise representation of external beliefs; a simple and innovative methodology to achieve distributed context management; and a reduced inter-agent data exchange format.
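The abstract mentions a concise representation of external beliefs and a reduced exchange format without detailing either; purely as an illustration, an external belief might be summarised by its identifier, source agent and supporting assumptions, so that only this compact summary crosses the network (all field names below are invented, not DiBeRT's actual format).

    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class ExternalBelief:
        belief_id: str     # invented fields; not DiBeRT's wire format
        source_agent: str  # agent that derived the belief
        assumptions: list  # identifiers of the assumptions it rests on

    def to_wire(belief):
        # Exchange only the compact summary, not the full local
        # justification structure behind the belief.
        return json.dumps(asdict(belief))

    print(to_wire(ExternalBelief("b42", "agent-3", ["a1", "a7"])))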
Abstract:
Thesis submitted to Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa, in partial fulfillment of the requirements for the degree of Master in Computer Science
Abstract:
To meet the increasing demands of complex inter-organisational processes and the demand for continuous innovation and internationalisation, it is evident that new forms of organisation are being adopted, fostering more intensive collaboration processes and sharing of resources, in what can be called collaborative networks (Camarinha-Matos, 2006:03). Information and knowledge are crucial resources in collaborative networks, and their management is a fundamental process to optimise. Knowledge organisation and collaboration systems are thus important instruments for the success of collaborative networks of organisations, and have been researched over the last decade in the areas of computer science, information science, management sciences, terminology and linguistics. Nevertheless, research in this area has paid little attention to multilingual contexts of collaboration, which pose specific and challenging problems. It is clear that access to and representation of knowledge will happen more and more in multilingual settings, which implies overcoming the difficulties inherent in the presence of multiple languages, through processes such as the localisation of ontologies. Although localisation, like other processes that involve multilingualism, is a rather well-developed practice whose methodologies and tools are fruitfully employed by the language industry in the development and adaptation of multilingual content, it has not yet been sufficiently explored as an element of support for the development of knowledge representations - in particular ontologies - expressed in more than one language. Multilingual knowledge representation is thus an open research area calling for cross-contributions from knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences.

This workshop joined researchers interested in multilingual knowledge representation, in a multidisciplinary environment, to debate the possibilities of cross-fertilisation between these fields in contexts where multilingualism continuously creates new and demanding challenges for current knowledge representation methods and techniques. Six papers dealing with different approaches to multilingual knowledge representation are presented, most of them describing tools, approaches and results obtained in ongoing projects.

In the first paper, Andrés Domínguez Burgos, Koen Kerremans and Rita Temmerman present a software module that is part of a workbench for terminological and ontological mining: Termontospider, a wiki crawler that aims to optimally traverse Wikipedia in search of domain-specific texts for extracting terminological and ontological information. The crawler is part of a tool suite for automatically developing multilingual termontological databases, i.e. ontologically underpinned multilingual terminological databases. The authors describe the basic principles behind the crawler and summarise the research setting in which the tool is currently being tested.

In the second paper, Fumiko Kano presents work comparing four feature-based similarity measures derived from the cognitive sciences. The purpose of the comparative analysis is to identify the potentially most effective model for mapping independent ontologies in a culturally influenced domain. Datasets based on standardised, pre-defined feature dimensions and values, obtainable from the UNESCO Institute for Statistics (UIS), were used so that the similarity measures could be verified against objectively developed data. According to the author, the results demonstrate that the Bayesian Model of Generalization provides the most effective cognitive model for identifying the most similar corresponding concepts for a targeted socio-cultural community.

In the third paper, Thierry Declerck, Hans-Ulrich Krieger and Dagmar Gromann present ongoing work and propose an approach to the automatic extraction of information from multilingual financial Web resources, to provide candidate terms for building ontology elements or instances of ontology concepts. The authors present an approach complementary to the direct localisation/translation of ontology labels: acquiring terminologies by accessing and harvesting the multilingual Web presences of structured-information providers in the field of finance. This leads to the detection of candidate terms in various multilingual sources in the financial domain that can be used not only as labels of ontology classes and properties but also for the possible generation of (multilingual) domain ontologies themselves.

In the next paper, Manuel Silva, António Lucas Soares and Rute Costa claim that, despite the availability of tools, resources and techniques aimed at the construction of ontological artefacts, developing a shared conceptualisation of a given reality still raises questions about the principles and methods that support the initial phases of conceptualisation. These questions become, according to the authors, more complex when the conceptualisation occurs in a multilingual setting. To tackle these issues the authors present a collaborative platform - conceptME - where terminological and knowledge representation processes support domain experts throughout a conceptualisation framework, allowing the inclusion of multilingual data as a way to promote knowledge sharing, enhance conceptualisation and support multilingual ontology specification.

In the fifth paper, Frieda Steurs and Hendrik J. Kockaert present TermWise, a large project dealing with legal terminology and phraseology for the Belgian public services, i.e. the translation office of the Ministry of Justice. The project aims to develop an advanced tool that embeds expert knowledge in the algorithms that extract specialised language from textual data (legal documents); its outcome is a knowledge database of Dutch/French equivalents for legal concepts, enriched with the phraseology related to the terms under discussion.

Finally, Deborah Grbac, Luca Losito, Andrea Sada and Paolo Sirito report on the preliminary results of a pilot project currently ongoing at the UCSC Central Library, where they propose to adapt, for subject librarians employed in large multilingual academic institutions, the model used by translators working within European Union institutions. The authors are using User Experience (UX) analysis to provide subject librarians with visual support, by means of "ontology tables" depicting the conceptual linking and connections of words with concepts, presented according to their semantic and linguistic meaning. The organisers hope that the selection of papers presented here will be of interest to a broad audience and will be a starting point for further discussion and cooperation.
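The four measures Kano compares are not defined in the abstract; as background only, feature-based similarity in the cognitive-science tradition is typically some variant of Tversky's ratio model, sketched below over sets of features (the feature sets and default weights are illustrative assumptions, not the paper's data).

    def tversky_similarity(a, b, alpha=0.5, beta=0.5):
        # Tversky's ratio model: common features weighed against the
        # distinctive features of each concept.
        a, b = set(a), set(b)
        common = len(a & b)
        return common / (common + alpha * len(a - b) + beta * len(b - a))

    # Illustrative feature sets for two "education level" concepts.
    primary = {"compulsory", "ages 6-10", "general curriculum"}
    lower_secondary = {"compulsory", "ages 11-14", "general curriculum"}
    print(tversky_similarity(primary, lower_secondary))  # -> 0.666...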
Abstract:
Thesis submitted to Faculdade de Ciências e Tecnologia of Universidade Nova de Lisboa in partial fulfilment of the requirements for the degree of Master in Computer Science
Abstract:
Software tools in education have been popular since personal computers became widespread. Engineering courses led the way in this development, and such tools have become almost a standard. Engineering graduates are familiar with numerical analysis tools, but also with simulators (e.g. of electronic circuits), computer-assisted design tools and others, depending on the degree. One of the main problems with these tools is when and how to start using them, so that they benefit students rather than merely substitute for potentially difficult calculations or design. In this paper, a software tool to be used by first-year students in electronics/electricity courses is presented. The growing acknowledgement and acceptance of open-source software led to the choice of an open-source numerical analysis tool, Scilab, as the basis for a toolbox. The toolbox was developed to be used standalone or integrated in an e-learning platform; the platform used was Moodle. The first step was to assess the mathematical skills necessary to solve the problems posed in electronics and electricity courses. Analysing existing circuit simulators, it is clear that, even though they are very helpful in showing the end result, they are not so effective for study and self-learning, since they show results but not the intermediate steps, which are crucial in problems involving derivatives or integrals. They are also not very effective at producing graphical results that could be used in reports and for an overall better comprehension of the results. The developed toolbox, built on the numerical analysis software Scilab, gives its users not only the end results of a circuit analysis, but also the expressions obtained in derivative and integral calculations, signal plots, vector diagrams, etc. The toolbox runs entirely in the Moodle web platform and provides the same results as the standalone application. Students can use the toolbox through the web platform (on computers where they do not have installation privileges) or on their personal computers by installing both Scilab and the toolbox. This approach was designed for first-year students of all engineering degrees that have electronics/electricity courses in their curricula.
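The toolbox itself is written for Scilab/Moodle and its code is not shown in the abstract; purely to illustrate the idea of exposing the intermediate symbolic steps that circuit simulators hide, here is a minimal Python/SymPy sketch for a textbook RC charging circuit (the choice of circuit and of SymPy is an assumption for illustration, not the paper's implementation).

    import sympy as sp

    t, R, C, V0 = sp.symbols("t R C V0", positive=True)

    # Charging-capacitor voltage in an RC circuit (standard textbook form).
    v_c = V0 * (1 - sp.exp(-t / (R * C)))

    # The intermediate step simulators usually hide: the symbolic
    # derivative giving the capacitor current i = C * dv/dt.
    i_c = sp.simplify(C * sp.diff(v_c, t))
    print(i_c)  # -> V0*exp(-t/(C*R))/R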
Abstract:
The usage of information and communication technologies has been growing among students and teachers. In order to improve the use of the Internet as a tool to support teaching and learning, it is necessary to understand students' Internet usage habits. Thus, a study was conducted with 1397 students from five schools of the Polytechnic of Porto. The data was collected through an online questionnaire and was analysed by age range, gender and scientific field. In this paper, gender differences are analysed and presented along three dimensions: type of Internet usage, communication tools, and the role of Internet tools in education.
Abstract:
The Internet plays an important role in higher education institutions, where Learning Management Systems (LMS) occupy a central place in the eLearning realm. In this chapter we aim to characterise the Internet and LMS usage patterns and their role in the largest Portuguese Polytechnic Institute. The usage patterns were analysed in two components: characterisation of Internet usage, and the role of the Internet and the LMS in education. Using a quantitative approach, the data analysis describes the differences between gender, age and scientific fields. The qualitative analysis carried out allows a better understanding of students' motivations, opinions and suggestions for improvement. The outcome of this work is a profile of Portuguese students' Internet and LMS usage patterns. We expect that these results can be used to select the most suitable digital pedagogical processes and tools for the learning process, and the most adequate LMS policies.
Abstract:
In a scientific research project it is important to define the underlying philosophical orientation of the project, because this will influence the choices made regarding the scientific methods used, as well as the way they will be applied. It is crucial, therefore, that the philosophy and the research design strategy be consistent with each other. These questions become even more relevant in qualitative research. Historically, the interpretive research philosophy is more associated with the scientific areas of the social sciences and humanities, where the subjectivity inherent in human intervention is more explicitly acknowledged. The information systems field is primarily rooted in computer science, though it also integrates issues related to management and organisations. This shift from purely technological guidance towards the consideration of problems of management and organisations has fostered the rise of research projects following the interpretive philosophy and using qualitative methods. This paper explores the importance of alignment between the epistemological orientation and the research design strategy in qualitative research projects. As a result, two PhD projects with different research design strategies, being developed in the technology and information systems field in the light of the interpretive paradigm, are presented.
Abstract:
Hardness testing, and more specifically the Vickers microhardness test, is one of the most widely used mechanical tests, whether in industry, in teaching or in materials science research and product development. In the great majority of cases, this test is mainly applied to characterise metallic materials or to control their manufacturing quality, being a test of relative simplicity and speed, with results that are comparable and relatable to other physical quantities of material properties. However, as a test method in which human intervention is important - the indentation produced by mechanical penetration is measured through an optical system - it exhibits some resulting weaknesses, such as dependence on the training and visual acuity of the technicians, and visual fatigue phenomena that affect the results over a work shift; these phenomena compromise the repeatability and reproducibility of the test results. CINFU owns a Vickers microhardness tester whose operation depends on a trained technician and presents all the weaknesses already mentioned, which made it eligible for the study and application of an alternative solution. This dissertation therefore presents the development of an alternative to the conventional optical method for measuring Vickers microhardness. Using National Instruments LabVIEW together with its computer vision tools (NI Vision), the program first asks the technician to select the camera coupled to the microhardness tester for digital image acquisition and to select the test method (test force); the program then processes the image (applying filters to remove background noise from the original image); next, at the operator's indication of the region of interest (ROI), the vertices of the indentation and the lengths of the resulting diagonals are identified automatically and, once accepted, the corresponding microhardness value is computed. For validation, certified hardness reference blocks (CRM) were used; the results were satisfactory, with a high level of accuracy in the measurements performed. Finally, an Excel spreadsheet was developed to determine the uncertainty associated with the Vickers microhardness measurements. The two possible methodologies - the conventional optical method and the computer vision tools - were then compared, and the proposed solution yielded good results.
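The dissertation's LabVIEW/NI Vision code is not reproduced here; for reference, the final calculation step applies the standard Vickers relation HV = 2*F*sin(136°/2)/d² ≈ 1.8544*F/d², with F the test force in kgf and d the mean of the two measured diagonals in mm. A minimal Python sketch (the example numbers are invented):

    def vickers_hv(force_kgf, d1_mm, d2_mm):
        # Standard Vickers relation: HV ~= 1.8544 * F / d^2,
        # F in kgf, d = mean of the two indentation diagonals in mm.
        d = (d1_mm + d2_mm) / 2.0
        return 1.8544 * force_kgf / d ** 2

    # Invented example: HV0.5 test (0.5 kgf) with measured diagonals
    # of 43 and 44 micrometres (0.043 and 0.044 mm).
    print(round(vickers_hv(0.5, 0.043, 0.044), 1))  # -> 490.0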
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the degree of Master in Biomedical Engineering