39 results for Prototype Verification System
at Instituto Politécnico do Porto, Portugal
Abstract:
Debugging electronic circuits is traditionally done with bench equipment directly connected to the circuit under debug. In the digital domain, the difficulties associated with direct physical access to circuit nodes led to the inclusion of resources supporting that activity, first at the printed circuit level and then at the integrated circuit level. The experience acquired with those solutions led to the emergence of dedicated infrastructures for debugging cores at the system-on-chip level. However, all these developments had little impact on the analog and mixed-signal domain, where debugging still depends, to a large extent, on direct physical access to circuit nodes. As a consequence, when analog and mixed-signal circuits are integrated as cores inside a system-on-chip, the difficulties associated with debugging increase, which in turn increases time-to-market and prototype verification costs. The present work considers the IEEE 1149.4 infrastructure as a means to support the debugging of mixed-signal circuits, namely to access circuit nodes, together with an embedded debug mechanism named the mixed-signal condition detector, necessary for watchpoint/breakpoint and real-time analysis operations. One of the main advantages of the proposed solution is the seamless migration to the system-on-chip level: since access is done through electronic means, debugging operations are eased at the different hierarchical levels.
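As an illustration of the debug mechanism described above, the following is a minimal sketch, in Python, of the comparison logic a mixed-signal condition detector performs: a sampled node voltage is checked against a programmed window, and a watchpoint/breakpoint event fires when the condition holds. Node names, thresholds and the software form itself are illustrative assumptions, not part of IEEE 1149.4.

```python
# Sketch of the comparison logic behind a mixed-signal condition detector:
# a sampled analog node is compared against a programmed voltage window and
# a breakpoint event fires when the condition holds. Node names and
# thresholds are illustrative, not defined by the IEEE 1149.4 standard.

from dataclasses import dataclass

@dataclass
class WatchCondition:
    node: str       # label of the monitored circuit node
    v_low: float    # lower bound of the trigger window (volts)
    v_high: float   # upper bound of the trigger window (volts)

def check_samples(cond, samples):
    """Return the index of the first sample inside the trigger window."""
    for i, v in enumerate(samples):
        if cond.v_low <= v <= cond.v_high:
            return i  # breakpoint: halt and hand control to the debugger
    return None

trace = [0.1, 0.4, 0.9, 1.6, 2.2]  # sampled node voltages
hit = check_samples(WatchCondition("out_a", 1.5, 2.0), trace)
print(f"breakpoint at sample {hit}" if hit is not None else "no trigger")
```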
Abstract:
Nowadays there are many multimedia applications on the Internet, and it is common for any website to present information in more than one form besides text, for example images, audio, video and animation. With the growing adoption of smartphones and tablets, mobile Internet traffic has been increasing rapidly, as has Internet access through the television. Web-based applications gain greater relevance due to the increased sharing and consumption of multimedia content, with or without editing or manipulation, through social networks such as Facebook. This document presents a study of HTML5 alternatives and the implementation of a web-based application within the scope of the Master's in Informatics Engineering, branch of Graphics Systems and Multimedia, at the Instituto Superior de Engenharia do Porto (ISEP). The application aims at the editing and manipulation of images, both on desktop and on mobile devices, with this processing done exclusively on the client side, that is, in the user's browser. The server is used only to store the application. During the development of the project, a study of image editing and manipulation solutions available on the market was carried out, with the respective comparative analysis, and modern web technologies such as HTML5, CSS3 and JavaScript, which make it possible to develop the prototype, are presented. Subsequently, the various phases of the development of the prototype are presented in detail, from system analysis to the presentation of the prototype and the technologies used. The results of the surveys conducted with a group of people who tested the prototype are also presented. Finally, the implementation is described more exhaustively, the difficulties encountered throughout the development are pointed out, and future improvements are indicated.
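As an illustration of the client-side processing the abstract describes, the sketch below shows, in Python for readability, the per-pixel brightness logic such an editor applies; the actual prototype would run an equivalent JavaScript loop over an HTML5 canvas ImageData buffer, whose RGBA byte layout is assumed here.

```python
# Per-pixel brightness adjustment, the kind of client-side operation the
# prototype performs on an HTML5 canvas ImageData buffer (RGBA byte order).
# Sketched in Python for illustration; the browser version is the same loop
# expressed in JavaScript.

def adjust_brightness(rgba, delta):
    """Add `delta` to each R, G, B byte, clamping to 0..255; alpha untouched."""
    out = bytearray(rgba)
    for i in range(0, len(out), 4):
        for c in range(3):  # R, G, B channels
            out[i + c] = max(0, min(255, out[i + c] + delta))
    return bytes(out)

pixels = bytes([10, 200, 250, 255,   # one RGBA pixel
                0, 0, 0, 255])       # a second, black pixel
print(list(adjust_brightness(pixels, 20)))  # [30, 220, 255, 255, 20, 20, 20, 255]
```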
Abstract:
A manufacturing system has a naturally dynamic nature, observed through several kinds of random occurrences and perturbations in working conditions and requirements over time. In this kind of environment it is important to be able to efficiently and effectively adapt existing schedules, on a continuous basis, in response to such disturbances, while keeping performance levels. The application of Meta-Heuristics and Multi-Agent Systems to the resolution of this class of real-world scheduling problems seems promising. This paper presents a prototype of MASDScheGATS (Multi-Agent System for Distributed Manufacturing Scheduling with Genetic Algorithms and Tabu Search).
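To make the meta-heuristic side concrete, here is a minimal, mutation-only genetic-algorithm sketch for ordering jobs on a single machine, the kind of search core such a scheduler could embed. Job data, parameters and the single-machine setting are illustrative assumptions, and the Tabu Search component is omitted for brevity.

```python
# Minimal sketch of a genetic-algorithm core for a single-machine job-ordering
# problem. Mutation-only for brevity; durations, weights and GA parameters
# are illustrative, not taken from MASDScheGATS.

import random
random.seed(1)

jobs = {"J1": (4, 2), "J2": (2, 5), "J3": (6, 1), "J4": (3, 3)}  # (duration, weight)

def cost(order):
    """Total weighted completion time of a job sequence (lower is better)."""
    t, total = 0, 0
    for j in order:
        dur, w = jobs[j]
        t += dur
        total += w * t
    return total

def mutate(order):
    """Swap two random positions (a permutation-preserving move)."""
    a, b = random.sample(range(len(order)), 2)
    child = list(order)
    child[a], child[b] = child[b], child[a]
    return child

pop = [random.sample(list(jobs), len(jobs)) for _ in range(20)]
for _ in range(50):  # evolve: keep the fitter half, refill with mutants
    pop.sort(key=cost)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]
best = min(pop, key=cost)
print(best, cost(best))
```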
Abstract:
A good verification strategy should bring the simulation and real operating environments close together. In this paper we describe a system-level co-verification strategy that uses a common flow for functional simulation, timing simulation and functional debug. This last step requires a BST (boundary-scan test) infrastructure, now widely available on commercial devices, especially on FPGAs with medium/large pin counts.
Abstract:
Many of the most common human reasoning capabilities, such as temporal and non-monotonic reasoning, have not yet been fully mapped into deployed systems, even though some theoretical breakthroughs have already been accomplished. This is mainly due to the inherent computational complexity of the theoretical approaches. In the particular area of fault diagnosis in power systems, however, some systems that tried to solve the problem have been deployed, using methodologies such as production-rule-based expert systems, neural networks, recognition of chronicles, fuzzy expert systems, etc. SPARSE (from the Portuguese acronym, meaning expert system for incident analysis and restoration support) was one of those systems and, in the course of its development, the need arose to cope with incomplete and/or incorrect information, as well as with the traditional problems of power system fault diagnosis based on SCADA (supervisory control and data acquisition) information retrieval, namely real-time operation, huge amounts of information, etc. This paper presents an architecture for a decision support system that solves these problems using a symbiosis of the event calculus and default-reasoning rule-based system paradigms, ensuring soft real-time operation with the ability to handle incomplete, incorrect or domain-incoherent information. A prototype implementation of this system is already at work in the control centre of the Portuguese Transmission Network.
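The event-calculus side of this symbiosis can be sketched as follows: timestamped SCADA events initiate and terminate fluents, and a query checks whether a fluent holds at a given instant. Event and fluent names are hypothetical, and the default-reasoning half (assuming normal behaviour when messages are missing) is left out for brevity.

```python
# Sketch of the event-calculus flavour of reasoning used for fault diagnosis:
# timestamped SCADA events initiate/terminate fluents, and holds_at() checks
# whether a fluent holds at a given instant. Names are illustrative.

INITIATES = {"breaker_open": "line_deenergised"}   # event -> fluent it starts
TERMINATES = {"breaker_close": "line_deenergised"}  # event -> fluent it ends

def holds_at(fluent, t, events):
    """True if `fluent` was initiated before t and not terminated since."""
    state = False
    for ev, ts in sorted(events, key=lambda e: e[1]):
        if ts > t:
            break
        if INITIATES.get(ev) == fluent:
            state = True
        elif TERMINATES.get(ev) == fluent:
            state = False
    return state

log = [("breaker_open", 10), ("breaker_close", 42)]
print(holds_at("line_deenergised", 20, log))  # True
print(holds_at("line_deenergised", 50, log))  # False: breaker reclosed
```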
Abstract:
Currently, power systems (PS) already accommodate a substantial penetration of distributed generation (DG) and operate in competitive environments. In the future, as a result of liberalisation and political regulations, PS will have to deal with large-scale integration of DG and other distributed energy resources (DER), such as storage, and provide market agents with the means to ensure flexible and secure operation. This cannot be done with the traditional PS operational tools used today, such as the quite restricted Supervisory Control and Data Acquisition (SCADA) information systems [1]. The trend to use local generation in the active operation of the power system requires new solutions for the data management system. The relevant standards have been developed separately over the last few years, so there is a need to unify them in order to obtain a common and interoperable solution. For distribution operation, the CIM models described in IEC 61968/70 are especially relevant. In Europe, dispersed and renewable energy resources (D&RER) are mostly operated without remote control mechanisms and feed the maximum amount of available power into the grid. To improve network operation performance, the idea of virtual power plants (VPP) will become a reality; in the future, the power generation of D&RER will be scheduled with high accuracy. In order to realise decentralised VPP energy management, communication facilities with standardised interfaces and protocols are needed. IEC 61850 is suitable to serve as a general standard for all communication tasks in power systems [2]. The paper deals with international activities and experiences in the implementation of a new data management and communication concept in the distribution system. The difficulties in coordinating the communication and data management standards, which were developed in parallel and are mutually inconsistent, are addressed first. The upcoming unification work, taking into account the growing role of D&RER in the PS, is then shown. It is possible to overcome the lag in current practical experience using new tools for creating and maintaining CIM data and for simulating the IEC 61850 protocol, a prototype of which is presented in the paper. Since the origin and required accuracy of the data depend on their use (e.g. operation or planning), some remarks concerning the definition of the digital interface incorporated in the merging unit concept, from the power utility point of view, are also presented. Finally, some required future work is identified.
Abstract:
As the complexity of embedded systems increases, multiple services have to compete for the limited resources of a single device. This situation is particularly critical for the small embedded devices used in consumer electronics, telecommunications, industrial automation, or automotive systems. In fact, in order to satisfy a set of constraints related to weight, space, and energy consumption, these systems are typically built using microprocessors with lower processing power and limited resources. The CooperatES framework has recently been proposed to tackle these challenges, allowing resource-constrained devices to collectively execute services with their neighbours in order to fulfil the complex Quality of Service (QoS) constraints imposed by users and applications. In order to demonstrate the framework's concepts, a prototype is being implemented on the Android platform. This paper discusses key challenges that must be addressed and possible directions for incorporating the desired real-time behaviour in Android.
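As a rough illustration of the cooperative QoS idea, and not of the CooperatES API itself, the sketch below admits a service at the best quality level that fits a device's spare capacity and otherwise delegates it to the neighbour with the most headroom; all names and figures are hypothetical.

```python
# Sketch of a cooperative QoS admission decision: pick the highest service
# quality level that fits the device's spare capacity, else offload to the
# neighbour with the most headroom. Levels and capacities are hypothetical,
# not the CooperatES framework's actual API.

def admit(levels, spare, neighbours):
    """levels: [(quality, cpu_cost), ...] sorted best-first."""
    for quality, cost in levels:
        if cost <= spare:
            return ("local", quality)
    # no level fits locally: offload to the least-loaded neighbour
    name, headroom = max(neighbours.items(), key=lambda kv: kv[1])
    for quality, cost in levels:
        if cost <= headroom:
            return (name, quality)
    return (None, None)  # reject: the QoS constraints cannot be met anywhere

levels = [("high", 0.7), ("medium", 0.4), ("low", 0.2)]
print(admit(levels, 0.3, {"nodeB": 0.5, "nodeC": 0.1}))  # ('local', 'low')
print(admit(levels, 0.1, {"nodeB": 0.5, "nodeC": 0.1}))  # ('nodeB', 'medium')
```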
Abstract:
Waste oil recycling companies play a very important role in our society. Competition among companies is tough and process optimization is essential for survival. By equipping oil containers with a level monitoring system that periodically reports the level and raises an alert when it reaches a preset threshold, oil recycling companies are able to streamline the oil collection process and, thus, reduce operating costs while maintaining the quality of service. This paper describes the development of this level monitoring system by a team of four students from different engineering backgrounds and nationalities. The team conducted a study of the state of the art, drew up marketing and sustainable development plans and, finally, designed and implemented a prototype that continuously measures the container content level and sends an alert message as soon as it reaches the preset capacity.
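The reporting and alerting logic at the heart of such a system is simple; below is a minimal sketch with the sensor reading stubbed out and the threshold assumed for illustration.

```python
# Sketch of the container-level monitoring logic: report the level each cycle
# and raise a single alert once the preset threshold is crossed. The sensor
# read is stubbed; the threshold and readings are illustrative.

THRESHOLD = 0.80  # preset capacity fraction that triggers collection

def monitor(readings, threshold=THRESHOLD):
    alerted = False
    for level in readings:  # one reading per reporting period
        print(f"report: container at {level:.0%}")
        if level >= threshold and not alerted:
            print("ALERT: schedule oil collection")
            alerted = True  # alert only once per fill cycle
    return alerted

monitor([0.55, 0.72, 0.81, 0.86])
```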
Abstract:
The process of resources systems selection plays an important part in the integration of Distributed/Agile/Virtual Enterprises (D/A/V Es). However, resources systems selection is still a difficult problem to solve in a D/A/V E, as pointed out in this paper. Globally, we can say that the selection problem has been approached from different perspectives, giving rise to different kinds of models/algorithms to solve it. In order to assist the development of an intelligent and flexible web prototype tool (broker tool) that integrates all the selection model activities and tools, and that can adapt to each D/A/V E project or instance (this is the major goal of our final project), in this paper we present a formulation of one kind of resources selection problem and the limitations of the algorithms proposed to solve it. We formulate a particular case of the problem as an integer program, which is solved using simplex and branch-and-bound algorithms, and identify their performance limitations (in terms of processing time) based on simulation results. These limitations depend on the number of processing tasks and on the number of pre-selected resources per processing task, defining the domain of applicability of the algorithms for the problem studied. The limitations detected point to the need to apply other kinds of algorithms (approximate solution algorithms) outside the domain of applicability found for the algorithms simulated. However, for a broker tool, knowledge of the algorithms' limitations is very important in order to, based on problem features, develop and select the most suitable algorithm and thus guarantee good performance.
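A minimal branch-and-bound sketch of the formulated problem, assigning one pre-selected resource to each processing task at minimum total cost, illustrates the kind of search whose processing time grows with the number of tasks and of pre-selected resources per task; the cost data are invented, and real instances carry further constraints.

```python
# Tiny branch-and-bound for the resource-selection problem sketched in the
# paper: assign one pre-selected resource to each processing task at minimum
# total cost. Costs are illustrative; real instances add capacity and
# compatibility constraints.

import math

costs = [  # costs[task] maps each pre-selected resource to its cost
    {"R1": 4, "R2": 7},
    {"R1": 3, "R2": 2, "R3": 9},
    {"R2": 5, "R3": 1},
]

def solve(task=0, chosen=(), spent=0, best=(math.inf, None)):
    if spent >= best[0]:
        return best                # bound: partial cost already too high
    if task == len(costs):
        return (spent, chosen)     # complete assignment improves the best
    for res, c in sorted(costs[task].items(), key=lambda kv: kv[1]):
        best = solve(task + 1, chosen + (res,), spent + c, best)
    return best

cost, assignment = solve()
print(cost, assignment)  # 7 ('R1', 'R2', 'R3')
```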
Abstract:
The clinical record is a confidential hospital document in which the entire clinical path of a given patient is registered. That path is recorded through the assignment of codes described in a classification system, also called coding data. The coding data are stored electronically in a database, and it is from this database that measures such as hospital billing, annual hospital funding, medical treatments, or even epidemiological data are derived. It is therefore fundamental to guarantee quality in the area of clinical coding, and for that it is necessary to resort to audit processes over the clinical records. The audit process is an independent evaluation activity whose objective is to add value to and improve the objects and processes under audit. Currently a tool named Programa Auditor is used, but this tool is based on an already outdated technology. The thesis defended here is that, through Expert Systems, it is possible to carry out internal audits of clinical records. We also intend to demonstrate that Expert Systems are a benefit for the experts in this area, since they decrease the probability of errors and make the verification process less time-consuming. In this context, the objective of this dissertation is to define and implement an Expert System that allows the auditing of hospital clinical records. The Expert System must also be able to translate the expert's reasoning, thereby detecting the various types of errors that make a clinical record non-conforming. To this end, a prototype of an Expert System for auditing clinical records was developed in the PROLOG language. This prototype served as the basis for an experiment that allowed a comparison with the Programa Auditor and verified its applicability in day-to-day hospital work.
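The dissertation's prototype is written in PROLOG; the sketch below re-expresses, in Python, the style of audit rule such a system encodes, where each rule inspects a coded record and reports a non-conformity. Field names and codes are hypothetical.

```python
# Re-expression, in Python, of the style of audit rule a clinical-coding
# expert system encodes: each rule inspects a coded clinical record and
# reports a non-conformity. Field names and codes are hypothetical.

def rule_missing_principal(rec):
    if not rec.get("principal_diagnosis"):
        return "no principal diagnosis coded"

def rule_sex_specific(rec):
    if rec.get("sex") == "M" and rec.get("principal_diagnosis", "").startswith("O"):
        return "obstetric code on a male patient"  # ICD-style 'O' chapter

RULES = [rule_missing_principal, rule_sex_specific]

def audit(record):
    """Run every rule; collect the non-conformities found."""
    return [msg for rule in RULES if (msg := rule(record))]

print(audit({"sex": "M", "principal_diagnosis": "O80"}))  # one finding
print(audit({"sex": "F", "principal_diagnosis": "J18"}))  # [] -> conforming
```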
Abstract:
Poster presented at the 28th GI/ITG International Conference on Architecture of Computing Systems (ARCS 2015), 24-26 March 2015, Porto, Portugal.
Abstract:
Presented at the seminar "Action Temps Réel: Infrastructures et Services Systèmes", 10 April 2015, Brussels, Belgium.
Abstract:
The goal of this project, one of the proposals of EPS@ISEP Spring 2014, was to develop an aquaponics system. Over recent years, aquaponics systems have received increased attention since they contribute to reducing the strain on resources in 1st and 3rd world countries. Aquaponics is the combination of hydroponics and aquaculture, mimicking a natural environment in order to successfully apply and enhance the understanding of natural cycles within an indoor process. Using this knowledge of natural cycles, it was possible to create a system with capabilities similar to those of a natural environment with the support of electronics, enhancing the overall efficiency of the system. The multinational team involved in the development of this system was composed of five students from five different countries and fields of study. This paper describes their solution, including the overall design, the technology involved and the benefits it can bring to the current market. The team was able to design and render the Computer Aided Design (CAD) drawings of the prototype, assemble all components, successfully test the electronics and comply with the budget. Furthermore, the designed solution was supported by a product sustainability study and included a specific marketing plan. Last but not least, the students enrolled in this project gained new multidisciplinary knowledge and improved their teamwork and cross-cultural communication skills.
Abstract:
In this paper the problem of the evolution of an object-oriented database in the context of orthogonal persistent programming systems is addressed. We have observed two characteristics in that type of system that offer particular conditions for implementing evolution in a semi-transparent fashion. That transparency can be further enhanced with the obliviousness provided by Aspect-Oriented Programming techniques. A meta-model was conceived and a prototype developed to test the feasibility of our approach. The system allows programs written against one schema to access, semi-transparently, data stored under other versions of that schema.
Abstract:
Applications are subject to a continuous evolution process with a profound impact on their underlying data model, hence requiring frequent updates to the applications' class structure and to the database structure as well. This twofold problem, schema evolution and instance adaptation, usually known as database evolution, is addressed in this thesis. Additionally, we address concurrency and error recovery problems with a novel meta-model and its aspect-oriented implementation. Modern object-oriented databases provide features that help programmers deal with object persistence, as well as with related problems such as database evolution, concurrency and error handling. In most systems there are transparent mechanisms to address these problems; nonetheless, the database evolution problem still requires some human intervention, which consumes much of programmers' and database administrators' work effort. Earlier research has demonstrated that aspect-oriented programming (AOP) techniques enable the development of flexible and pluggable systems. In those earlier works, the schema evolution and instance adaptation problems were addressed as database management concerns. However, none of that research focused on orthogonal persistent systems. We argue that AOP techniques are well suited to address these problems in orthogonal persistent systems. Regarding concurrency and error recovery, earlier research showed that only syntactic obliviousness between the base program and aspects is possible. Our meta-model and framework follow an aspect-oriented approach focused on the object-oriented orthogonal persistent context. The proposed meta-model is characterised by its simplicity, in order to achieve efficient and transparent database evolution mechanisms. Our meta-model supports multiple versions of a class structure by applying a class versioning strategy, thus enabling bidirectional application compatibility among versions of each class structure. That is to say, the database structure can be updated while earlier applications continue to work, as do later applications that know only the updated class structure. The specific characteristics of orthogonal persistent systems, as well as a metadata enrichment strategy within the application's source code, complete the inception of the meta-model and have motivated our research work. To test the feasibility of the approach, a prototype was developed. Our prototype is a framework that mediates the interaction between applications and the database, providing them with orthogonal persistence mechanisms. These mechanisms are introduced into applications as an "aspect", in the aspect-oriented sense. Objects do not need to extend any superclass, implement an interface or carry a particular annotation. Parametric type classes are also correctly handled by our framework. However, classes that belong to the programming environment must not be handled as versionable, due to restrictions imposed by the Java Virtual Machine. Regarding concurrency support, the framework provides applications with a multithreaded environment which supports database transactions and error recovery. The framework keeps applications oblivious to the database evolution problem, as well as to persistence. Programmers can update the applications' class structure, and the framework will produce a new version of it at the database metadata layer.
Using our XML-based pointcut/advice constructs, the framework's instance adaptation mechanism is extended, hence keeping the framework oblivious to this problem as well. The potential development gains provided by the prototype were benchmarked. In our case study, the results confirm that the mechanisms' transparency has positive repercussions on the programmer's productivity, simplifying the entire evolution process at the application and database levels. The meta-model itself was also benchmarked in terms of complexity and agility; compared with other meta-models, it requires fewer meta-object modifications in each schema evolution step. Other types of tests were carried out in order to validate the robustness of the prototype and meta-model. To perform these tests, we used a small-size OO7 database, chosen for its data model complexity. Since the developed prototype offers some features that were not observed in other known systems, performance benchmarks were not possible; however, the developed benchmark is now available for future performance comparisons with equivalent systems. In order to test our approach in a real-world scenario, we developed a proof-of-concept application. This application was developed without any persistence mechanisms; using our framework and minor changes to the application's source code, we added those mechanisms. Furthermore, we tested the application in a schema evolution scenario. This real-world experience showed that, using our framework, applications remain oblivious to persistence and database evolution. In this case study, our framework proved to be a useful tool for programmers and database administrators. Performance issues and the single Java Virtual Machine concurrency model are the major limitations found in the framework.
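The class-versioning idea at the core of the meta-model can be sketched as follows: the metadata layer keeps several versions of a class structure, and instance adaptation projects a stored object onto the version an application expects, in either direction. This is a plain-Python analogy with invented names; the framework itself performs this transparently through aspects on a Java Virtual Machine.

```python
# Sketch of the class-versioning idea behind the meta-model: the metadata
# layer keeps several versions of a class structure, and instance adaptation
# converts stored objects between versions on access, in both directions.
# Names are hypothetical; the framework itself does this transparently
# through aspects.

SCHEMA = {
    ("Person", 1): ["name"],
    ("Person", 2): ["name", "email"],  # evolution step: field added
}

DEFAULTS = {"email": ""}  # fill-in values when upgrading older instances

def adapt(obj, cls, to_v):
    """Project a stored instance onto another version of its class."""
    return {f: obj.get(f, DEFAULTS.get(f)) for f in SCHEMA[(cls, to_v)]}

old = {"name": "Ana"}                              # written by a v1 program
print(adapt(old, "Person", 2))                     # {'name': 'Ana', 'email': ''}
new = {"name": "Rui", "email": "rui@example.org"}  # written by a v2 program
print(adapt(new, "Person", 1))                     # {'name': 'Rui'} (v1 view)
```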