907 results for Programming frameworks
Abstract:
This paper presents a mechanically verified implementation of an algorithm for deciding the equivalence of Kleene algebra terms within the Coq proof assistant. The algorithm decides the equivalence of two given regular expressions through an iterated process of testing the equivalence of their partial derivatives, and does not require the construction of the corresponding automata. Recent theoretical and experimental research provides evidence that this method is, on average, more efficient than the classical methods based on automata. We present some performance tests, comparisons with similar approaches, and also introduce a generalization of the algorithm to decide the equivalence of terms of Kleene algebra with tests. The motivation for this work is to use the developed libraries as trusted frameworks for carrying out certified program verification.
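As a companion to the abstract above, the following is an unverified Python illustration of the partial-derivative method it describes (the paper's certified version is developed in Coq, not reproduced here). The tuple encoding of regular expressions and all function names are our own:

```python
# Sketch of regex equivalence via Antimirov partial derivatives, in the spirit
# of the abstract above; no automaton is ever built. Regexes are tuples:
# ('0',) empty, ('1',) epsilon, ('sym', a), ('cat', r, s), ('alt', r, s), ('star', r).

def nullable(r):
    tag = r[0]
    if tag in ('1', 'star'):
        return True
    if tag in ('0', 'sym'):
        return False
    if tag == 'cat':
        return nullable(r[1]) and nullable(r[2])
    return nullable(r[1]) or nullable(r[2])  # alt

def pd(a, r):
    """Set of partial derivatives of r with respect to symbol a."""
    tag = r[0]
    if tag == 'sym':
        return {('1',)} if r[1] == a else set()
    if tag == 'alt':
        return pd(a, r[1]) | pd(a, r[2])
    if tag == 'cat':
        out = {('cat', d, r[2]) for d in pd(a, r[1])}
        return out | pd(a, r[2]) if nullable(r[1]) else out
    if tag == 'star':
        return {('cat', d, r) for d in pd(a, r[1])}
    return set()  # '0' and '1' have no derivatives

def equivalent(r, s, alphabet):
    """Iterated bisimulation over sets of partial derivatives."""
    todo, seen = [(frozenset([r]), frozenset([s]))], set()
    while todo:
        R, S = todo.pop()
        if (R, S) in seen:
            continue
        seen.add((R, S))
        # Both sets must agree on acceptance of the empty word.
        if any(map(nullable, R)) != any(map(nullable, S)):
            return False
        for a in alphabet:
            dR = frozenset().union(*(pd(a, x) for x in R))
            dS = frozenset().union(*(pd(a, x) for x in S))
            todo.append((dR, dS))
    return True

# (ab)* versus 1 + ab(ab)*: same language, so the test succeeds.
ab = ('cat', ('sym', 'a'), ('sym', 'b'))
r1 = ('star', ab)
r2 = ('alt', ('1',), ('cat', ab, r1))
print(equivalent(r1, r2, 'ab'))  # True
```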
Abstract:
Dissertation presented to obtain the Master's Degree in Chemical and Biochemical Engineering
Abstract:
These are the proceedings of the eighth national conference on XML, its Associated Technologies and its Applications (XATA'2010). The paper selection resulted in 33% of papers accepted as full papers and 33% accepted as short papers. While these two types of papers were distinguished during the conference, with different talk durations, they all had the same limit of 12 pages. We are happy that the selected papers cover both aspects of the conference: XML technologies and XML applications. In the first group we can include the articles on parsing and transformation technologies, such as “Processing XML: a rewriting system approach”, “Visual Programming of XSLT from examples”, “A Refactoring Model for XML Documents”, “A Performance based Approach for Processing Large XML Files in Multicore Machines”, “XML to paper publishing with manual intervention” and “Parsing XML Documents in Java using Annotations”. XML-core related papers are also present, focusing on the testing of XML tools: “Test::XML::Generator: Generating XML for Unit Testing” and “XML Archive for Testing: a benchmark for GuessXQ”. XML as the basis for application development is also present, being discussed in different areas, such as “Web Service for Interactive Products and Orders Configuration”, “XML Description for Automata Manipulations”, “Integration of repositories in Moodle”, “XML, Annotations and Database: a Comparative Study of Metadata Definition Strategies for Frameworks”, “CardioML: Integrating Personal Cardiac Information for Ubiquous Diagnosis and Analysis”, “A Semantic Representation of Users Emotions when Watching Videos” and “Integrating SVG and SMIL in DAISY DTB production to enhance the contents accessibility in the Open Library for Higher Education”. The wide spread of subjects makes us believe that, for the time being, XML is here to stay, which enhances the importance of gathering this community to discuss related science and technology. Small conferences are going through a bad period: authors look for impact and numbers, and only submit their work to big conferences sponsored by the right institutions. However, the group of people behind this conference still believes that spaces like this should be preserved and maintained. This 8th gathering marks the beginning of a new cycle. We know who we are and what our identity is, and we will keep working to preserve that. We hope the publication containing this year's works will attract the same attention and interest as previous editions, and above all that it helps in others' work. Finally, we would like to thank all the authors for their work and interest in the conference, and the scientific committee members for their reviewing work.
Abstract:
A new iterative algorithm, based on the inexact-restoration (IR) approach combined with the filter strategy, is presented for solving nonlinear constrained optimization problems. The high-level algorithm was suggested by Gonzaga et al. (SIAM J. Optim. 14:646–669, 2003) but not yet implemented, since its internal algorithms were left unspecified. The filter, a concept introduced by Fletcher and Leyffer (Math. Program. Ser. A 91:239–269, 2002), replaces the merit function, avoiding penalty parameter estimation and the difficulties related to nondifferentiability. In the IR approach, two independent phases are performed in each iteration: the feasibility phase and the optimality phase. A line search filter is used in the first phase to generate a “more feasible” point, and then in the optimality phase to reach an “optimal” point. Numerical experiments with a collection of AMPL problems and a performance comparison with IPOPT are provided.
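To make the filter idea concrete, here is a minimal Python sketch of the standard Fletcher-Leyffer acceptance rule the abstract refers to. It is not the paper's implementation; the margin parameter gamma and the envelope form of the dominance test are our assumptions:

```python
# Minimal sketch of a filter for step acceptance in constrained optimization.
# Not the paper's algorithm: the dominance rule with a small margin follows
# the standard Fletcher-Leyffer formulation; gamma is an assumed parameter.

class Filter:
    def __init__(self, gamma=1e-5):
        self.gamma = gamma    # margin enforcing sufficient decrease
        self.entries = []     # list of (h, f) pairs: (infeasibility, objective)

    def is_acceptable(self, h, f):
        """A trial point is acceptable if no entry dominates it: against every
        entry it must improve either infeasibility or objective (with margin)."""
        return all(h <= (1 - self.gamma) * h_i or f <= f_i - self.gamma * h_i
                   for h_i, f_i in self.entries)

    def add(self, h, f):
        """Insert (h, f) and discard any entries it dominates."""
        self.entries = [(h_i, f_i) for h_i, f_i in self.entries
                        if not (h <= h_i and f <= f_i)]
        self.entries.append((h, f))

# Usage: vet an iterate produced by the feasibility or optimality phase.
flt = Filter()
flt.add(1.0, 10.0)
print(flt.is_acceptable(0.5, 12.0))  # True: less infeasible, so not dominated
print(flt.is_acceptable(1.5, 11.0))  # False: worse in both measures
```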
Abstract:
Application development is a rapidly expanding area in the information technology market and, as such, it evolves quickly. The drivers of this trend are communications and computing equipment, which are ever more robust and ever faster. Applications must keep pace with this evolution, adopting more complex/complete architectures in order to support all client requests while producing responses within acceptable times. This dissertation discusses several application architectures that can be implemented depending on the context at hand, for example, a scenario with few or many clients, or with little or much capital to invest in servers. A leveling of the concepts underlying application development is provided. The state of the art is then analyzed for web and object-oriented programming languages, databases, JavaScript frameworks, application architectures and, finally, approaches to defining measurable goals in application development. Two prototypes were implemented: one with a multi-tier architecture using several programming languages and technologies, and the other with a single (monolithic) tier and a single programming language. The two prototypes were tested and compared in order to choose one of the architectures for a given usage scenario.
Abstract:
Integrated information systems contribute to the efficient management of companies, whether in their internal organization and operation or in their external relations. The market for this software is dominated by companies that create and distribute proprietary systems. An alternative exists: free software, which provides open-source applications, mostly under no-cost licenses, that can be adapted to companies' needs. The goal of this work is to assess the viability of free platforms, one vertical (OFBiz) and one horizontal (Spring), as options when choosing an information system for Portuguese Small and Medium Enterprises. Among the main business areas of organizations, Human Resources was selected for adaptation in the OFBiz application, focusing on two use cases: an essential feature that is currently missing (payroll processing) and an existing one assessed in terms of its adaptation needs (recruitment). Since language support is an indispensable requirement for the internationalization of the application, its implementation was also analyzed. The research methodology used was Design Science Research, and a prototype was implemented for testing and evaluating the project, with the elaboration of two models: configuration and development. Once the prototype was implemented, the vertical framework proved to be a more viable alternative than the horizontal one, owing to its existing functionality, which eases adaptation to the information needs of Small and Medium Enterprises. Its technological and structural base allows the application to be adapted by specialist staff within the companies themselves.
Abstract:
Dissertation to obtain the Master's Degree in Biomedical Engineering
Abstract:
This report describes a study based on research into and evaluation of platforms (JavaScript frameworks and libraries), carried out under the course unit Tese/Dissertação/Estágio of the Master's Degree in Computer Science, area of expertise in Arquitecturas, Sistemas e Redes, at Instituto Superior de Engenharia do Porto (ISEP). The JavaScript language is increasingly present in software development, particularly software for the web environment. For this reason there is a need to investigate the subject: since the platforms emerging in the market have a very low level of maturity compared to existing ones, it is crucial to identify the main differences so that developers can save time, labor, and other costs in research and experimentation. This project aims to report on a range of platforms and JavaScript libraries existing in the market. To deepen the investigation, a study of the use cases currently found in the market was conducted. To support the work, qualitative research methods were used, drawing on existing research content related to the area, as well as quantitative research methods. A survey was thus conducted, sent by email to fifty regular users of the JavaScript language, platforms, and libraries, relating the functional and non-functional characteristics of the platforms to the use cases existing in the market, in order to understand how these tools are used in their professional lives. This study allows developers to access information that is compared and evaluated so as to achieve a more precise assessment of the various analyzed platforms. An evaluation of how well the various platforms fit the various use cases was also carried out.
Abstract:
Dissertation to obtain the Master's Degree in Chemical and Biochemical Engineering
Abstract:
Dissertation to obtain the Master's Degree in Informatics Engineering
Abstract:
Dissertation to obtain the Master's Degree in Informatics Engineering
Abstract:
The Intel® Xeon Phi™ is the first processor based on Intel's MIC (Many Integrated Cores) architecture. It is a co-processor specially tailored for data-parallel computations, whose basic architectural design is similar to that of GPUs (Graphics Processing Units), leveraging many integrated cores of low computational power to perform parallel computations. The main novelty of the MIC architecture, relative to GPUs, is its compatibility with the Intel x86 architecture. This enables the use of many of the tools commonly available for the parallel programming of x86-based architectures, which may lead to a smaller learning curve. However, programming the Xeon Phi still entails aspects intrinsic to accelerator-based computing in general, and to the MIC architecture in particular. In this thesis we advocate the use of algorithmic skeletons for programming the Xeon Phi. Algorithmic skeletons abstract the complexity inherent to parallel programming, hiding details such as resource management, parallel decomposition, and inter-execution-flow communication, thus removing these concerns from the programmer's mind. In this context, the goal of the thesis is to lay the foundations for the development of a simple but powerful and efficient skeleton framework for programming the Xeon Phi processor. For this purpose we build upon Marrow, an existing framework for the orchestration of OpenCL™ computations in multi-GPU and CPU environments. We extend Marrow to execute both OpenCL and C++ parallel computations on the Xeon Phi. We evaluate the newly developed framework with several well-known benchmarks, such as Saxpy and N-Body, both to compare its performance on the co-processor against the existing framework and to assess its performance on the Xeon Phi versus a multi-GPU environment.
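For readers unfamiliar with the skeleton idea, the following is a generic Python sketch of a "map" skeleton, showing how a single call hides resource management and parallel decomposition from the programmer. It is only an illustration of the concept, not the Marrow API (Marrow is a C++/OpenCL framework), and all names here are ours:

```python
# Generic illustration of an algorithmic skeleton: a "map" skeleton that hides
# worker pools, chunking, and result reassembly behind one call. Not Marrow.
from concurrent.futures import ProcessPoolExecutor

def map_skeleton(fn, data, workers=4, chunk=256):
    """Apply fn to every element of data in parallel.
    The caller never manages processes, decomposition, or communication."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fn, data, chunksize=chunk))

def saxpy_elem(pair, a=2.0):
    """One element of Saxpy (a*x + y), the benchmark named in the abstract."""
    x, y = pair
    return a * x + y

if __name__ == "__main__":
    xs, ys = range(10_000), range(10_000)
    result = map_skeleton(saxpy_elem, list(zip(xs, ys)))
    print(result[:3])  # [0.0, 3.0, 6.0]
```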
Abstract:
Companies are increasingly dependent on distributed web-based software systems to support their businesses. This increases the need to maintain and extend software systems with up-to-date new features. Thus, the development process for introducing new features usually needs to be swift and agile, and the supporting software evolution process needs to be safe, fast, and efficient. However, this is usually a difficult and challenging task for a developer, due to the lack of support offered by programming environments, frameworks, and database management systems. Changes needed at the level of the code, the database model, and the actual data contained in the database must be planned and developed together and executed in a synchronized way. Even under a careful development discipline, the impact of changing an application's data model is hard to predict. The lifetime of an application comprises changes and updates designed and tested using data that is usually far from the real, production data. So, coding DDL and DML SQL scripts to update the database schema and data is the usual (and hard) approach taken by developers. Such a manual approach is error-prone and disconnected from the real data in production, because developers may not know the exact impact of their changes. This work aims to improve the maintenance process in the context of the Agile Platform by Outsystems. Our goal is to design and implement new data-model evolution features that ensure safe support for change and a sound migration process. Our solution includes impact analysis mechanisms targeting both the data model and the data itself. This provides developers with a safe, simple, and guided evolution process.
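As a minimal sketch of the kind of synchronized schema-and-data migration with an up-front impact check that the abstract describes, consider the following Python example. SQLite is used only for self-containment; the table, column names, and the specific transformation are hypothetical, and this is not the OutSystems mechanism itself:

```python
# Sketch of a synchronized DDL + DML migration with a data-level impact check.
# Hypothetical schema; not the Agile Platform / OutSystems implementation.
import sqlite3

def migrate_split_name(conn):
    """Split customer.name into first_name/last_name, checking impact first."""
    cur = conn.cursor()
    # Impact analysis on the actual data: refuse to migrate if any row would
    # be mangled by the split (names without exactly one space).
    bad = cur.execute(
        "SELECT COUNT(*) FROM customer "
        "WHERE name NOT LIKE '% %' OR name LIKE '% % %'").fetchone()[0]
    if bad:
        raise RuntimeError(f"{bad} rows would not survive the split")
    # Schema change (DDL) and data change (DML) run in one transaction.
    with conn:
        cur.execute("ALTER TABLE customer ADD COLUMN first_name TEXT")
        cur.execute("ALTER TABLE customer ADD COLUMN last_name TEXT")
        cur.execute(
            "UPDATE customer SET "
            "first_name = substr(name, 1, instr(name, ' ') - 1), "
            "last_name  = substr(name, instr(name, ' ') + 1)")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customer (name) VALUES ('Ada Lovelace')")
migrate_split_name(conn)
print(conn.execute("SELECT first_name, last_name FROM customer").fetchall())
# [('Ada', 'Lovelace')]
```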
Abstract:
Machine ethics is an interdisciplinary field of inquiry that emerges from the need to imbue autonomous agents with the capacity for moral decision-making. While some approaches provide implementations in Logic Programming (LP) systems, they have not exploited LP-based reasoning features that appear essential for moral reasoning. This PhD thesis aims to investigate further the appropriateness of LP, notably a combination of LP-based reasoning features, including techniques available in LP systems, for machine ethics. Moral facets, as studied in moral philosophy and psychology, that are amenable to computational modeling are identified and mapped to appropriate LP concepts for representing and reasoning about them. The main contributions of the thesis are twofold. First, novel approaches are proposed for employing tabling in contextual abduction and updating, individually and combined, plus an LP approach to counterfactual reasoning; the latter is implemented on top of the aforementioned combined abduction and updating technique with tabling. These are all important for modeling various issues of the aforementioned moral facets. Second, a variety of LP-based reasoning features are applied to model the identified moral facets, through moral examples taken off the shelf from the morality literature. These applications include: (1) modeling moral permissibility according to the Doctrines of Double Effect (DDE) and Triple Effect (DTE), demonstrating deontological and utilitarian judgments via integrity constraints (in abduction) and preferences over abductive scenarios; (2) modeling moral reasoning under uncertainty of actions, via abduction and probabilistic LP; (3) modeling moral updating (which allows other, possibly overriding, moral rules to be adopted by an agent, on top of those it currently follows) via the integration of tabling in contextual abduction and updating; and (4) modeling moral permissibility and its justification via counterfactuals, where counterfactuals are used for formulating DDE.
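The thesis models these facets in LP systems with abduction, where a Prolog-style encoding would be the natural vehicle; purely to mirror the structure of contribution (1), here is a toy Python sketch of DDE-style permissibility as a constraint plus a preference over scenarios. The scenario data (classic trolley cases) and all names are ours:

```python
# Toy mirror of DDE permissibility: an integrity constraint rules scenarios
# out, and a utilitarian preference ranks the survivors. The thesis does this
# with abduction and integrity constraints in LP, not in Python.
scenarios = [
    # (name, is the harm used as a means?, net lives saved by acting)
    ("divert_trolley", False, 4),   # bystander case: harm is a side effect
    ("push_heavy_man", True, 4),    # footbridge case: harm is the means
    ("do_nothing", False, 0),
]

def dde_permissible(harm_as_means):
    """Doctrine of Double Effect: the good outcome must not be achieved
    *by means of* the harm (the integrity constraint in the LP model)."""
    return not harm_as_means

permissible = [(name, saved) for name, means, saved in scenarios
               if dde_permissible(means)]
# Utilitarian preference among permissible abductive scenarios.
best = max(permissible, key=lambda s: s[1])
print(permissible)  # push_heavy_man is ruled out by the constraint
print(best)         # ('divert_trolley', 4)
```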