47 results for Database management -- Computer programs


Relevance:

30.00%

Publisher:

Abstract:

Master's degree in Electrical and Computer Engineering

Relevance:

30.00%

Publisher:

Abstract:

Master's degree in Electrical Engineering – Electrical Power Systems

Relevance:

30.00%

Publisher:

Abstract:

One of the most difficult issues in e-learning is student assessment. Already a demanding task for theoretical topics, it becomes even more challenging when the topics under evaluation are practical. ISCAP's Information Systems Department comprises about twenty teachers who have for several years combined an e-learning environment (currently Moodle 2.3) with traditional assessment, and who are now planning and implementing a new e-learning assessment strategy. This effort was undertaken to evaluate a practical topic (the use of spreadsheets to solve management problems) that is common to courses shared by several undergraduate degree programs. The same team is already experienced in assessing theoretical information systems topics on the b-learning platform, so this project extends previous experiences, with the team aware of the additional difficulties posed by the practical nature of the topics. This paper describes the project and presents the two cycles of the action research methodology used to conduct the study. The goal of the first cycle was to produce a database of questions; when it was deployed with a pilot group of students, several problems were identified. The second cycle then consisted of solving those problems, preparing the database and all the players for a broader implementation. For each cycle, the phases, drawbacks, and achievements are described. This paper suits all those who are, or are planning to be, in the process of shifting their assessment strategy from a traditional one to one supported by an e-learning platform.

Relevance:

30.00%

Publisher:

Abstract:

This dissertation presents the work carried out within the Thesis/Dissertation (TEDI) course unit of the Master's in Electrical and Computer Engineering – Specialization in Automation and Systems, in partnership with Live Simply, a home automation company that decided to invest in innovation and in the development of value-added services and products to consolidate its position in the market. In this context, two assets were identified for Live Simply: on the one hand, a technical support tool that integrates and simplifies the design, configuration, and management phases of home automation installations and, on the other, an interface to the installation through which the customer can consult and change, in real time, the state of the actuators. After analysing the available technologies, the solutions to adopt were selected (programming languages, database servers, and development environments) and the system architecture was defined, detailing the installation design, configuration, and management modules, the database structure, and the installation's control hardware. The software modules were then developed, and the hardware module was configured and programmed. Finally, an exhaustive set of tests was run on the different modules, demonstrating the correct operation of the tool and the adequacy of the technologies employed. The resulting technical support tool integrates the design, configuration, and management phases of home automation installations, improving the technicians' performance and the response to customers. The interface offered to the installation owner is a friendly, easy-to-use Web interface that allows the state of the installation to be consulted and modified in real time.
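
The abstract does not spell out how the customer-facing Web interface is implemented; the following is only a hypothetical sketch of the kind of endpoint such an interface could expose for reading and changing actuator state. The routes, names, and in-memory state store are assumptions for illustration, not taken from the dissertation.

```python
# Minimal sketch of a Web endpoint for consulting/changing actuator state.
# Hypothetical: the dissertation's actual stack and data model are not given.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Assumed in-memory stand-in for the installation's actuator state;
# a real system would query the control hardware or its database.
actuators = {"living_room_light": "off", "garage_door": "closed"}

@app.route("/actuators/<name>", methods=["GET"])
def get_state(name):
    if name not in actuators:
        return jsonify(error="unknown actuator"), 404
    return jsonify(name=name, state=actuators[name])

@app.route("/actuators/<name>", methods=["PUT"])
def set_state(name):
    if name not in actuators:
        return jsonify(error="unknown actuator"), 404
    actuators[name] = request.get_json()["state"]  # e.g. {"state": "on"}
    return jsonify(name=name, state=actuators[name])

if __name__ == "__main__":
    app.run()
```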

Relevance:

30.00%

Publisher:

Abstract:

This work falls within the area of computer networks, covering the protocols and the set of equipment and software required to administer, control, and monitor this type of infrastructure. To manage a data network, technical knowledge and documentation are essential in order to represent the network configuration as faithfully as possible, following step by step the interconnections between the existing equipment and thus offering the most accurate possible view of the installations. The SNMP protocol is used on a large scale and is practically a standard for the administration of networks based on TCP/IP technology. This protocol defines the communication between a manager and an agent, establishing the format and meaning of the messages exchanged between them. It supports products from different manufacturers, allowing the administrator to maintain a database with relevant monitoring information from various devices, which can be consulted and analysed by NMS software designed specifically for computer network management. The work presented in this dissertation aimed to develop a tool to support the management of the communications infrastructure of the Francisco Sá Carneiro Airport, making it possible to know the state of the network elements in real time, to help diagnose possible problems, and to support the planning and expansion of the installed network. The developed tool uses the capabilities of the SNMP protocol to acquire monitoring data from network equipment present in the AFSC network, making them available in a graphical interface that eases the visualisation of the parameters and operating alerts that matter most in network administration.
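
The dissertation does not detail how the tool issues its SNMP queries; as a hedged sketch of the manager-to-agent GET request that such polling relies on, here is a minimal example using the pysnmp library (an assumed stack; the agent address and community string are placeholders).

```python
# Sketch of an SNMP GET from a manager to an agent using pysnmp (assumed stack;
# the dissertation's actual implementation is not described).
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

def get_sysdescr(host, community="public"):
    """Query sysDescr.0 (OID 1.3.6.1.2.1.1.1.0) from the agent at `host`."""
    error_indication, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData(community, mpModel=1),       # SNMPv2c
        UdpTransportTarget((host, 161)),
        ContextData(),
        ObjectType(ObjectIdentity("1.3.6.1.2.1.1.1.0")),
    ))
    if error_indication or error_status:
        raise RuntimeError(str(error_indication or error_status))
    return str(var_binds[0][1])

# Example with a placeholder address:
# print(get_sysdescr("192.0.2.1"))
```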

Relevance:

30.00%

Publisher:

Abstract:

Future distribution systems will have to deal with an intensive penetration of distributed energy resources, ensuring reliable and secure operation according to the smart grid paradigm. SCADA (Supervisory Control and Data Acquisition) is an essential infrastructure for this evolution. This paper proposes a new conceptual design of an intelligent SCADA with a decentralized, flexible, and intelligent approach, adaptive to the context (context awareness). This SCADA model is used to support the energy resource management undertaken by a distribution network operator (DNO). Resource management considers all the involved costs, power flows, and electricity prices, allowing the use of network reconfiguration and load curtailment. Locational Marginal Prices (LMP) are evaluated and used in specific situations to apply Demand Response (DR) programs on a global or a local basis. The paper includes a case study using a 114-bus distribution network and load demand based on real data.
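
The paper's resource-management formulation is not reproduced in this abstract; purely as an illustrative toy of how LMPs might trigger demand response on a local basis, the sketch below curtails a share of load at buses whose LMP exceeds a threshold. The threshold, bus data, and curtailment rule are invented for the sketch and are not taken from the case study.

```python
# Toy illustration of LMP-triggered local demand response (invented numbers;
# the paper's actual optimization model is not reproduced here).
LMP_THRESHOLD = 90.0   # assumed price limit, currency/MWh
CURTAIL_SHARE = 0.10   # assumed share of flexible load to curtail

buses = [
    {"bus": 12, "lmp": 84.5,  "load_mw": 1.8},
    {"bus": 47, "lmp": 97.2,  "load_mw": 2.4},
    {"bus": 83, "lmp": 101.9, "load_mw": 0.9},
]

def local_dr(buses):
    """Curtail a fixed share of load at buses whose LMP exceeds the threshold."""
    actions = []
    for b in buses:
        if b["lmp"] > LMP_THRESHOLD:
            actions.append({"bus": b["bus"],
                            "curtail_mw": round(CURTAIL_SHARE * b["load_mw"], 3)})
    return actions

print(local_dr(buses))  # -> curtailment actions for buses 47 and 83
```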

Relevance:

30.00%

Publisher:

Abstract:

Building reliable real-time applications on top of commercial off-the-shelf (COTS) components is not a straightforward task, so it is essential to provide a simple and transparent programming model that abstracts programmers from the low-level implementation details of distribution and replication. However, the recent trend towards incorporating pre-emptive multitasking applications in reliable real-time systems inherently increases their complexity. It is therefore important to provide a transparent programming model that enables pre-emptive multitasking applications to be implemented without having to deal simultaneously with system requirements and with distribution and replication issues. The Distributed Embedded Architecture using COTS components (DEAR-COTS) has previously been proposed to support real-time and reliable distributed computer-controlled systems (DCCS) using COTS components. Within DEAR-COTS, the hard real-time subsystem provides a framework for the development of reliable real-time applications, which are the core of DCCS applications. This paper presents the proposed framework and demonstrates how it can be used to support the transparent replication of software components.
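
The framework itself is not described in the abstract; the toy below only illustrates what "transparent replication" means for calling code: the application talks to a single object while calls are fanned out to replicas and the results are voted on. The proxy, replica interface, and majority vote are assumptions for the sketch, not the DEAR-COTS design.

```python
# Toy illustration of transparent replication: callers use one object while
# calls are forwarded to all replicas and a majority vote picks the result.
from collections import Counter

class ReplicatedComponent:
    def __init__(self, replicas):
        self._replicas = replicas  # objects exposing the same interface

    def __getattr__(self, name):
        def call(*args, **kwargs):
            results = [getattr(r, name)(*args, **kwargs) for r in self._replicas]
            value, votes = Counter(results).most_common(1)[0]
            if votes <= len(results) // 2:
                raise RuntimeError("no majority among replicas")
            return value
        return call

class Sensor:  # example replica implementation
    def __init__(self, reading):
        self._reading = reading
    def read(self):
        return self._reading

sensor = ReplicatedComponent([Sensor(21), Sensor(21), Sensor(22)])
print(sensor.read())  # -> 21 (majority of the three replicas)
```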

Relevance:

30.00%

Publisher:

Abstract:

With advances in computer science and information technology, computing systems are becoming increasingly complex, with a growing number of heterogeneous components. They are thus becoming more difficult to monitor, manage, and maintain, a process well known to be labor intensive and error prone. In addition, traditional approaches to system management struggle to keep up with rapidly changing environments. There is therefore a need for automatic and efficient approaches to monitoring and managing complex computing systems. In this paper, we propose an innovative framework for scheduling system management that combines the Autonomic Computing (AC) paradigm, Multi-Agent Systems (MAS), and Nature-Inspired Optimization Techniques (NIT). We also consider the resolution of realistic problems: the scheduling of a cutting and treatment stainless steel sheet line is evaluated. Results show that the proposed approach has advantages when compared with other scheduling systems.
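
The abstract names the framework's ingredients but not its algorithms; as a hedged toy of what the nature-inspired optimization layer could look like, the sketch below runs simulated annealing over a job sequence on a single line. The processing times, due dates, and objective are invented and are not necessarily the technique or problem data used by the authors.

```python
# Toy nature-inspired optimizer for a single-line schedule: simulated annealing
# over a job sequence, minimizing total tardiness. Job data are invented.
import math
import random

jobs = {"J1": (4, 10), "J2": (3, 6), "J3": (6, 14), "J4": (2, 7)}  # (proc_time, due_date)

def total_tardiness(seq):
    t, tard = 0, 0
    for j in seq:
        p, d = jobs[j]
        t += p
        tard += max(0, t - d)
    return tard

def anneal(seq, temp=10.0, cooling=0.95, steps=500):
    best = cur = list(seq)
    for _ in range(steps):
        cand = cur[:]
        i, k = random.sample(range(len(cand)), 2)
        cand[i], cand[k] = cand[k], cand[i]            # neighbour: swap two jobs
        delta = total_tardiness(cand) - total_tardiness(cur)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            cur = cand
            if total_tardiness(cur) < total_tardiness(best):
                best = cur[:]
        temp *= cooling
    return best, total_tardiness(best)

print(anneal(list(jobs)))  # -> (best sequence found, its total tardiness)
```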

Relevance:

30.00%

Publisher:

Abstract:

This dissertation falls within the scope of Information Systems, specifically the development of Web applications such as a website. With the large-scale use of technological means, their growth has been exponential, which translates into the ease with which various types of computing platforms can be found on the Internet. Moreover, a large share of organisations nowadays have their own website, where they publicise their services and/or products. The aim of this dissertation is to explore these technologies, namely UML (Unified Modeling Language) diagrams and database design, and then to develop a website. The development of this website does not propose the creation of a new technology, but rather the combined use of several technologies supported by UML tools. The work is organised into three main phases: requirements analysis, implementation, and interface design. In the requirements analysis, the objectives proposed for the system and the needs/requirements for its implementation were surveyed, supported essentially by the system's Use Case Diagram. In the implementation phase, the files and directories that form the logical architecture were built according to the models described in the Class Diagram and the Entity-Relationship Diagram. The identified requirements were analysed and used in the composition of the interfaces and the navigation system. Finally, in the interface design phase, the developed interfaces were refined based on the author's artistic and creative concept. This refinement follows personal taste and aims to produce an interface that can also please as many users as possible, which can be seen in the way the links between pages, the titles, the headers, the colours and animations, and the overall design are laid out. Different programming languages were used to develop the website, namely HyperText Markup Language (HTML), PHP: Hypertext Preprocessor (PHP), and JavaScript. HTML was used to lay out all the visible content of the pages and to define their layout, and PHP to run small scripts that interact with the different features of the site. JavaScript was used to define the design of the pages and to add some visual effects. The pages that make up the website were built with the Macromedia Dreamweaver software, which simplified their implementation thanks to the ease with which pages can be constructed in it. To interact with the database management system, MySQL, the phpMyAdmin application was used, which simplifies access to the database, allowing its data to be defined, manipulated, and queried.

Relevance:

30.00%

Publisher:

Abstract:

It is widely accepted that solving programming exercises is fundamental to learning how to program. Nevertheless, solving exercises is only effective if students receive an assessment of their work: an exercise solved incorrectly will consolidate a false belief, and without feedback many students will not be able to overcome their difficulties. However, creating, managing, and accessing a large number of exercises covering all the points in the curriculum of a programming course, in classes with a large number of students, can be a daunting task without the appropriate tools working in unison. This involves a diversity of tools, from the environments where programs are coded, to automatic program evaluators providing feedback on students' attempts, passing through the authoring, management, and sequencing of programming exercises as learning objects. We believe that the integration of these tools will have a great impact on the acquisition of programming skills. Our research objective is to manage and coordinate a network of eLearning systems where students can solve computer programming exercises. Networks of this kind include systems such as learning management systems (LMS), evaluation engines (EE), learning objects repositories (LOR) and exercise resolution environments (ERE). Our strategy to achieve interoperability among these tools is based on a shared definition of a programming exercise as a Learning Object (LO).
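
The shared definition of a programming exercise as a Learning Object is only named, not specified, in this abstract; the sketch below is a hypothetical illustration of the kind of metadata such a definition could carry so that the LMS, LOR, EE and ERE can exchange the same exercise. The field names are assumptions for illustration, not the authors' actual schema.

```python
# Hypothetical sketch of a programming exercise described as a Learning Object;
# field names are assumptions, not the shared definition used by the authors.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCase:
    stdin: str
    expected_stdout: str

@dataclass
class ProgrammingExercise:
    identifier: str                  # stable id shared across LMS, LOR and EE
    title: str
    statement: str                   # problem description shown in the ERE
    language: str                    # e.g. "java", "python"
    tests: List[TestCase] = field(default_factory=list)  # used by the evaluation engine

exercise = ProgrammingExercise(
    identifier="ex-001",
    title="Sum of two integers",
    statement="Read two integers and print their sum.",
    language="python",
    tests=[TestCase("2 3\n", "5\n")],
)
print(exercise.title)
```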

Relevance:

30.00%

Publisher:

Abstract:

The use of laptops and the Internet has created the technological conditions for instructors and students to take advantage of the diversity of online information, communication, collaboration, and sharing with others. The integration of Internet services into teaching practices can drive thematic, social, and digital improvement for the agents involved. There are many benefits in using a Learning Management System (LMS) such as Moodle to support lectures in higher education. We also consider its implications for student support and online interaction, leading educational agents to combine different learning environments, where face-to-face instruction is blended with computer-mediated instruction (blended learning), increasing the possibilities for better quality and quantity of human communication in a learning setting. In general, learning management systems contain synchronous and asynchronous communication tools, management features, and assessment utilities. These assessment utilities allow lecturers to systematize basic assessment tasks. Assessments can be delivered straight to the student and, upon completion, immediately returned with grades and detailed feedback. Learning management systems can therefore also be used for assessment purposes in higher education.

Relevance:

30.00%

Publisher:

Abstract:

Multi-agent architectures are well suited for complex, inherently distributed problem-solving domains. Among the many challenging aspects that arise within this framework, a crucial one emerges: how to incorporate dynamic and conflicting agent beliefs? While belief revision in a single-agent scenario concentrates on incorporating new information while preserving consistency, in a multi-agent system it also has to deal with possible conflicts between the agents' perspectives. To provide an adequate framework, each agent, built as a combination of an assumption-based belief revision system and a cooperation layer, was enriched with additional features: a distributed search control mechanism allowing dynamic context management, and a set of different distributed consistency methodologies. As a result, a Distributed Belief Revision Testbed (DiBeRT) was developed. This paper is a preliminary report presenting some of DiBeRT's contributions: a concise representation of external beliefs; a simple and innovative methodology to achieve distributed context management; and a reduced inter-agent data exchange format.
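
DiBeRT's assumption-based revision mechanism is not described in this abstract; the toy below only illustrates the general problem it addresses: two agents holding conflicting beliefs about the same proposition and resolving the conflict by a simple policy (here, preferring the more credible source). The agent structure, credibility weights, and resolution rule are invented for the sketch.

```python
# Invented toy of distributed belief conflict handling; not DiBeRT's mechanism.
class Agent:
    def __init__(self, name, credibility=1.0):
        self.name, self.credibility = name, credibility
        self.beliefs = {}  # proposition -> (truth value, source, credibility)

    def assert_belief(self, prop, value):
        self.beliefs[prop] = (value, self.name, self.credibility)

    def receive(self, prop, value, source, credibility):
        if prop not in self.beliefs:
            self.beliefs[prop] = (value, source, credibility)
            return
        cur_value, _, cur_cred = self.beliefs[prop]
        if cur_value != value and credibility > cur_cred:
            # conflict: keep the view backed by the more credible source
            self.beliefs[prop] = (value, source, credibility)

a, b = Agent("A", credibility=0.6), Agent("B", credibility=0.9)
a.assert_belief("valve_open", True)
b.assert_belief("valve_open", False)
a.receive("valve_open", *b.beliefs["valve_open"])
print(a.beliefs["valve_open"])  # -> (False, 'B', 0.9): B's more credible belief wins
```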

Relevance:

30.00%

Publisher:

Abstract:

The changes introduced into the European Higher Education Area (EHEA) by the Bologna Process, together with renewed pedagogical and methodological practices, have created a new teaching-learning paradigm: Student-Centred Learning. In addition, the last few years have been characterized by the application of Information Technologies, especially the Semantic Web, not only to the teaching-learning process, but also to administrative processes within learning institutions. The aim of this study was, on the one hand, to present a model for identifying and classifying Competencies and Learning Outcomes and, on the other, to develop the computer applications of the information management model, namely a relational database and an ontology.
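
Neither the database schema nor the ontology is detailed in the abstract; as a hedged sketch of how a competency could be linked to a learning outcome as ontology triples, the example below uses rdflib with an invented namespace and invented class and property names, purely as an assumption about what such a model might contain.

```python
# Hedged sketch: a competency and a learning outcome as RDF triples with rdflib.
# The namespace, classes and property are invented, not the study's ontology.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/ehea#")  # assumed namespace
g = Graph()

g.add((EX.C1, RDF.type, EX.Competency))
g.add((EX.C1, RDFS.label, Literal("Design relational databases")))
g.add((EX.LO1, RDF.type, EX.LearningOutcome))
g.add((EX.LO1, RDFS.label, Literal("Normalise a schema to third normal form")))
g.add((EX.C1, EX.achievedThrough, EX.LO1))  # link competency to learning outcome

# List the learning outcomes attached to competency C1.
for lo in g.objects(EX.C1, EX.achievedThrough):
    print(lo)
```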

Relevance:

30.00%

Publisher:

Abstract:

To meet the increasing demands of complex inter-organizational processes and the demand for continuous innovation and internationalization, new forms of organisation are being adopted, fostering more intensive collaboration processes and sharing of resources, in what can be called collaborative networks (Camarinha-Matos, 2006:03). Information and knowledge are crucial resources in collaborative networks, and their management is a fundamental process to optimize. Knowledge organisation and collaboration systems are thus important instruments for the success of collaborative networks of organisations and have been researched in the last decade in the areas of computer science, information science, management sciences, terminology, and linguistics. Nevertheless, research in this area has not paid much attention to multilingual contexts of collaboration, which pose specific and challenging problems. It is clear that access to and representation of knowledge will increasingly happen in multilingual settings, which implies overcoming the difficulties inherent to the presence of multiple languages, through processes such as the localization of ontologies. Although localization, like other processes that involve multilingualism, is a rather well-developed practice whose methodologies and tools are fruitfully employed by the language industry in the development and adaptation of multilingual content, it has not yet been sufficiently explored as an element of support to the development of knowledge representations - in particular ontologies - expressed in more than one language. Multilingual knowledge representation is therefore an open research area calling for cross-contributions from knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences.

This workshop brought together researchers interested in multilingual knowledge representation, in a multidisciplinary environment, to debate the possibilities of cross-fertilization between these fields as applied to contexts where multilingualism continuously creates new and demanding challenges for current knowledge representation methods and techniques. Six papers dealing with different approaches to multilingual knowledge representation are presented, most of them describing tools, approaches, and results obtained in ongoing projects.

In the first paper, Andrés Domínguez Burgos, Koen Kerremans and Rita Temmerman present a software module that is part of a workbench for terminological and ontological mining: Termontospider, a wiki crawler that aims to optimally traverse Wikipedia in search of domain-specific texts from which to extract terminological and ontological information. The crawler is part of a tool suite for automatically developing multilingual termontological databases, i.e. ontologically underpinned multilingual terminological databases. The authors describe the basic principles behind the crawler and summarize the research setting in which the tool is currently being tested.

In the second paper, Fumiko Kano presents a comparison of four feature-based similarity measures derived from the cognitive sciences. The purpose of the comparative analysis is to identify the potentially most effective model for mapping independent ontologies in a culturally influenced domain. For that, datasets based on standardized, pre-defined feature dimensions and values obtainable from the UNESCO Institute for Statistics (UIS) were used, so that the similarity measures are verified against objectively developed datasets. According to the author, the results demonstrate that the Bayesian Model of Generalization provides the most effective cognitive model for identifying the most similar corresponding concepts for a targeted socio-cultural community.

In another presentation, Thierry Declerck, Hans-Ulrich Krieger and Dagmar Gromann describe ongoing work and propose an approach to the automatic extraction of information from multilingual financial Web resources, providing candidate terms for building ontology elements or instances of ontology concepts. The authors present an approach complementary to the direct localization/translation of ontology labels: terminologies are acquired by accessing and harvesting the multilingual Web presences of structured information providers in the field of finance, leading to the detection of candidate terms in various multilingual sources in the financial domain that can be used not only as labels of ontology classes and properties but also for the possible generation of (multilingual) domain ontologies themselves.

In the next paper, Manuel Silva, António Lucas Soares and Rute Costa argue that, despite the availability of tools, resources and techniques aimed at the construction of ontological artifacts, developing a shared conceptualization of a given reality still raises questions about the principles and methods that support the initial phases of conceptualization. These questions become, according to the authors, more complex when the conceptualization occurs in a multilingual setting. To tackle these issues the authors present a collaborative platform - conceptME - where terminological and knowledge representation processes support domain experts throughout a conceptualization framework, allowing the inclusion of multilingual data as a way to promote knowledge sharing, enhance conceptualization, and support a multilingual ontology specification.

In another presentation, Frieda Steurs and Hendrik J. Kockaert present TermWise, a large project dealing with legal terminology and phraseology for the Belgian public services, i.e. the translation office of the Ministry of Justice. The project aims to develop an advanced tool that embeds expert knowledge in the algorithms that extract specialized language from textual data (legal documents), and whose outcome is a knowledge database of Dutch/French equivalents for legal concepts, enriched with the phraseology related to the terms under discussion.

Finally, Deborah Grbac, Luca Losito, Andrea Sada and Paolo Sirito report on the preliminary results of a pilot project currently ongoing at the UCSC Central Library, in which they propose to adapt, for subject librarians employed in large and multilingual academic institutions, the model used by translators working within European Union institutions. The authors use User Experience (UX) analysis to provide subject librarians with visual support, by means of “ontology tables” depicting conceptual links and connections of words with concepts, presented according to their semantic and linguistic meaning.

The organizers hope that the selection of papers presented here will be of interest to a broad audience and will be a starting point for further discussion and cooperation.
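
The four similarity measures compared by Kano are not defined in this summary; purely as a hedged illustration of what a feature-based similarity measure looks like, the sketch below computes a Tversky-style ratio over shared and distinctive feature sets. The features and weights are invented, and this is not necessarily one of the four measures studied.

```python
# Minimal illustration of a feature-based similarity measure: a Tversky-style
# ratio over shared and distinctive features. Features/weights are invented.
def tversky(a, b, alpha=0.5, beta=0.5):
    a, b = set(a), set(b)
    common = len(a & b)
    return common / (common + alpha * len(a - b) + beta * len(b - a))

concept_x = {"public", "compulsory", "free_of_charge"}
concept_y = {"public", "compulsory", "fee_based"}
print(round(tversky(concept_x, concept_y), 3))  # -> 0.667
```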

Relevance:

30.00%

Publisher:

Abstract:

INTED2010, the 4th International Technology, Education and Development Conference, was held in Valencia (Spain) on March 8-10, 2010.