32 results for Domain-specific programming languages
Abstract:
Application development is a fast-growing area of the information technology market and, as such, one that evolves quickly. The drivers of this trend are communications and computing equipment, which are increasingly robust and fast. Applications must keep pace with this evolution by adopting more complex and complete architectures capable of handling all client requests while producing responses within acceptable times. This dissertation covers several application architectures that can be implemented depending on the context, for example scenarios with few or many clients, or with little or substantial capital to invest in servers. It first provides a levelling of the concepts underlying application development. It then reviews the state of the art of web and object-oriented programming languages, databases, JavaScript frameworks, application architectures and, finally, approaches for defining measurable goals in application development. Two prototypes were implemented: one with a multi-tier architecture using several programming languages and technologies, and one with a single tier (monolithic) using a single programming language. Both prototypes were tested and compared in order to choose one of the architectures for a given usage scenario.
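Purely as an illustration of the contrast the dissertation evaluates, a minimal Java sketch (all names hypothetical, not the prototypes' code) of a layered call path versus a single-tier handler:

```java
// Hypothetical sketch contrasting the two prototype styles: a layered
// call path (presentation -> business -> data) versus a single-tier
// handler. All names are illustrative.
interface OrderRepository {                   // data layer
    double totalFor(String clientId);
}

class OrderService {                          // business layer
    private final OrderRepository repo;
    OrderService(OrderRepository repo) { this.repo = repo; }
    double invoiceTotal(String clientId) { return repo.totalFor(clientId) * 1.23; }
}

class OrderController {                       // presentation layer
    private final OrderService service;
    OrderController(OrderService service) { this.service = service; }
    String handle(String clientId) { return "total=" + service.invoiceTotal(clientId); }
}

class MonolithicHandler {                     // single tier: one class does everything
    String handle(String clientId) {
        double total = 100.0;                 // would read storage directly here
        return "total=" + (total * 1.23);
    }
}
```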
Abstract:
This work is a contribution to the e-Framework, arguably the most prominent e-learning framework today, and consists of the definition of a service for the automatic evaluation of programming exercises. This evaluation domain differs from the trivial evaluations modelled by languages such as the IMS Question & Test Interoperability (QTI) specification. Complex evaluation domains justify the development of specialized evaluators that participate in several business processes. These business processes can combine other types of systems, such as Programming Contest Management Systems, Learning Management Systems, Integrated Development Environments and Learning Object Repositories, where programming exercises are stored as Learning Objects. This contribution describes the implementation approaches used, more precisely: behaviours and requests, use and interactions, applicable standards, interface definition and usage scenarios.
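The paper itself defines the service's interface; the Java sketch below is only a hypothetical illustration of the kind of submit/report contract such an evaluation service involves, with all names invented here:

```java
// Hypothetical sketch of an automatic-evaluation service contract.
// The real interface definition is the one given in the paper; all
// names below are illustrative.
import java.util.List;

interface EvaluationService {
    /** Submit a program against an exercise stored as a Learning Object; returns a ticket. */
    String evaluate(String exerciseId, String programSource, String language);

    /** Retrieve the evaluation report produced for a previous submission. */
    EvaluationReport getReport(String ticket);
}

record EvaluationReport(String ticket, boolean accepted, List<String> observations) {}
```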
Abstract:
This work is part of the broader theme "Suporte à Computação Paralela e Distribuída em Java" (Support for Parallel and Distributed Computing in Java), also the subject of Daniel Barciela's thesis in the Informatics Engineering master's programme at the Instituto Superior de Engenharia do Porto. Its main goal is the definition and creation of the programmer-facing interface, and it also covers how nodes communicate and cooperate to execute tasks towards a single global objective. Within this dissertation, a preliminary study was carried out of the theoretical models of parallel computing, along with an analysis of languages and frameworks that support this kind of computing. The main goal of this study was to analyse how these models and languages allow the programmer to express parallel processing when developing applications. The dissertation resulted in a framework named Distributed Parallel Framework for Java (DPF4j), whose main purpose is to give programmers support for developing parallel and distributed applications. The framework was developed in Java. This dissertation covers the programming interface and all the communication between cooperating nodes of DPF4j. Finally, the tests performed showed that DPF4j, although still a prototype, already outperforms other frameworks and languages with the same goals.
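The DPF4j API itself is specified in the dissertation; as a stand-in, here is a minimal Java sketch of the scatter/gather style of interface such frameworks expose, built on plain java.util.concurrent rather than DPF4j's cooperating nodes:

```java
// Stand-in illustration of the kind of task-distribution interface a
// framework like DPF4j offers; local threads play the role of nodes.
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSum {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2); // stand-in for remote nodes
        List<Callable<Long>> tasks = List.of(                   // work split into subtasks
                () -> sum(1, 500_000),
                () -> sum(500_001, 1_000_000));
        long total = 0;
        for (Future<Long> part : pool.invokeAll(tasks)) {       // scatter, then gather
            total += part.get();
        }
        System.out.println(total);                              // 500000500000
        pool.shutdown();
    }

    static long sum(long from, long to) {
        long s = 0;
        for (long i = from; i <= to; i++) s += i;
        return s;
    }
}
```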
Abstract:
Managing programming exercises requires several heterogeneous systems, such as evaluation engines, learning object repositories and exercise resolution environments. Coordinating networks of such disparate systems is rather complex. These tools would be too specific to incorporate into an e-Learning platform. Even if they could be provided as pluggable components, the burden of maintaining them would be prohibitive for institutions with few courses in those domains. This work presents a standards-based approach for coordinating a network of e-Learning systems participating in the automatic evaluation of programming exercises. The proposed approach uses a pivot component to orchestrate the interaction among all the systems using communication standards. The approach was validated through its effective use in the classroom, and we present some preliminary results.
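As a hypothetical illustration of the pivot idea (none of these types are the paper's actual components), a Java sketch of one component mediating between a repository, an evaluation engine and the calling LMS:

```java
// Hypothetical sketch of the pivot pattern: the pivot fetches an
// exercise from a repository, forwards a student's attempt to an
// evaluation engine, and returns the report to the calling LMS.
// All interfaces are illustrative.
interface Repository { String fetchExercise(String id); }
interface Evaluator  { String evaluate(String exercise, String attempt); }

class Pivot {
    private final Repository repo;
    private final Evaluator engine;
    Pivot(Repository repo, Evaluator engine) { this.repo = repo; this.engine = engine; }

    /** Called by the LMS; hides repository and evaluator behind one entry point. */
    String handleSubmission(String exerciseId, String attempt) {
        String exercise = repo.fetchExercise(exerciseId);   // standard repository query
        return engine.evaluate(exercise, attempt);          // standard evaluation request
    }
}
```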
Abstract:
This paper presents a modified Particle Swarm Optimization (PSO) methodology to solve the problem of energy resources management with high penetration of distributed generation and Electric Vehicles (EVs) with gridable capability (V2G). The objective of the day-ahead scheduling problem in this work is to minimize operation costs, namely energy costs, regarding the management of these resources in the smart grid context. The modifications applied to the PSO aim to improve its adequacy for the mentioned problem. The proposed Application Specific Modified Particle Swarm Optimization (ASMPSO) includes an intelligent mechanism to adjust velocity limits during the search process, as well as self-parameterization of the PSO parameters, making it more user-independent. It presents better robustness and convergence characteristics than the tested PSO variants, as well as better constraint handling. This enables its use on real-world large-scale problems in much shorter times than deterministic methods, providing system operators with adequate decision support and achieving efficient resource scheduling, even when a significant number of alternative scenarios must be considered. The paper includes two realistic case studies with different penetrations of gridable vehicles (1000 and 2000). The proposed methodology is about 2600 times faster than the Mixed-Integer Non-Linear Programming (MINLP) reference technique, reducing the time required from 25 h to 36 s for the scenario with 2000 vehicles, with about a one percent difference in the objective function cost value.
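The exact velocity-limit adjustment and self-parameterization rules are the paper's; the sketch below is only a generic PSO in Java with a simple velocity-limit decay as a placeholder, minimizing a stand-in objective:

```java
// Illustrative particle swarm optimizer minimizing the sphere function.
// The velocity-limit decay below is a simple placeholder; the actual
// ASMPSO adjustment and self-parameterization rules are the paper's.
import java.util.Random;

public class PsoSketch {
    static final int DIM = 10, PARTICLES = 30, ITERATIONS = 500;

    public static void main(String[] args) {
        Random rnd = new Random(42);
        double[][] x = new double[PARTICLES][DIM], v = new double[PARTICLES][DIM];
        double[][] pBest = new double[PARTICLES][DIM];
        double[] pBestF = new double[PARTICLES];
        double[] gBest = new double[DIM];
        double gBestF = Double.MAX_VALUE;
        double vMax = 1.0;                          // velocity limit, adapted below

        for (int i = 0; i < PARTICLES; i++) {       // random initialization in [-5, 5]
            for (int d = 0; d < DIM; d++) x[i][d] = rnd.nextDouble() * 10 - 5;
            pBest[i] = x[i].clone();
            pBestF[i] = f(x[i]);
            if (pBestF[i] < gBestF) { gBestF = pBestF[i]; gBest = x[i].clone(); }
        }
        for (int t = 0; t < ITERATIONS; t++) {
            for (int i = 0; i < PARTICLES; i++) {
                for (int d = 0; d < DIM; d++) {     // standard inertia + cognitive + social terms
                    v[i][d] = 0.72 * v[i][d]
                            + 1.49 * rnd.nextDouble() * (pBest[i][d] - x[i][d])
                            + 1.49 * rnd.nextDouble() * (gBest[d] - x[i][d]);
                    v[i][d] = Math.max(-vMax, Math.min(vMax, v[i][d])); // clamp velocity
                    x[i][d] += v[i][d];
                }
                double fx = f(x[i]);
                if (fx < pBestF[i]) { pBestF[i] = fx; pBest[i] = x[i].clone(); }
                if (fx < gBestF)    { gBestF = fx;    gBest = x[i].clone(); }
            }
            vMax *= 0.995;                          // placeholder velocity-limit adaptation
        }
        System.out.println("best cost: " + gBestF);
    }

    static double f(double[] p) {                   // sphere function as a stand-in objective
        double s = 0;
        for (double c : p) s += c * c;
        return s;
    }
}
```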
Abstract:
The marriage of emerging information technologies with control technologies is a major driving force that, in the context of the factory floor, is creating enormous eagerness to extend the capabilities of currently available fieldbus networks to cover functionalities not considered until recently. Providing wireless capabilities to this type of communication network is a big share of that effort. The RFieldbus European project is just one example, where PROFIBUS was provided with suitable extensions for implementing hybrid wired/wireless communication systems. In RFieldbus, interoperability between wired and wireless components is achieved by the use of specific intermediate networking systems operating as repeaters, thus creating a single logical ring (SLR) network. The main advantage of the SLR approach is that the effort for protocol extensions is not significant. However, a multiple logical ring (MLR) approach provides traffic and error isolation between different network segments. This concept was introduced in previous work, where an approach for a bridge-based architecture was briefly outlined. This paper focuses on the details of the Inter-Domain Protocol (IDP), which is responsible for handling transactions between different network domains (wired or wireless) running the PROFIBUS protocol.
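Purely conceptual: a Java sketch of the bridge-based forwarding idea, where a bridge relays a transaction addressed to a station in another domain. The real IDP message formats and timing rules are those specified in the paper; everything here is invented for illustration:

```java
// Hypothetical illustration of bridge-based inter-domain forwarding:
// a bridge relays a request from one network domain to another and
// routes the response back. Not a model of the actual IDP.
import java.util.Map;

interface Domain { byte[] deliver(int stationAddress, byte[] request); }

class Bridge {
    private final Map<Integer, Domain> routeTable; // station address -> domain

    Bridge(Map<Integer, Domain> routeTable) { this.routeTable = routeTable; }

    /** Forward a transaction addressed to a station in another domain. */
    byte[] relay(int destinationAddress, byte[] request) {
        Domain target = routeTable.get(destinationAddress);
        if (target == null) throw new IllegalArgumentException("unknown station");
        return target.deliver(destinationAddress, request); // open transaction, await response
    }
}
```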
Abstract:
Embedded real-time applications increasingly present high computation requirements that must be completed within specific deadlines but follow highly variable patterns, depending on the data available at a given instant. The current trend towards parallel processing in the embedded domain provides higher processing power; however, it does not address the variability in the processing pattern. Dimensioning each device for its worst-case scenario implies lower average utilization and processing capacity that is available, yet unusable, in the overall system. A solution to this problem is to extend the parallel execution of the applications, allowing networked nodes to distribute the workload, in peak situations, to neighbour nodes. In this context, this report proposes a framework to develop parallel and distributed real-time embedded applications, transparently using OpenMP and the Message Passing Interface (MPI), within a programming model based on OpenMP. The technical report also devises an integrated timing model, which enables structured reasoning about the timing behaviour of these hybrid architectures.
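The report's framework is based on OpenMP and MPI; as an illustration of the peak-offloading decision only, here is a hypothetical Java stand-in (threshold, queue and neighbour wiring are all invented):

```java
// Conceptual sketch of peak offloading: when the local queue exceeds
// a threshold, surplus work is handed to a neighbour node. This only
// illustrates the load-distribution decision, not OpenMP/MPI.
import java.util.ArrayDeque;
import java.util.Deque;

class OffloadingNode {
    private final Deque<Runnable> queue = new ArrayDeque<>();
    private final OffloadingNode neighbour;          // may be null at the edge
    private final int threshold;

    OffloadingNode(OffloadingNode neighbour, int threshold) {
        this.neighbour = neighbour;
        this.threshold = threshold;
    }

    void submit(Runnable job) {
        if (queue.size() >= threshold && neighbour != null) {
            neighbour.submit(job);                   // peak: distribute to neighbour
        } else {
            queue.add(job);                          // normal: keep for local execution
        }
    }

    void drainOne() {
        Runnable job = queue.poll();
        if (job != null) job.run();                  // local execution step
    }
}
```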
Abstract:
The selection of resource systems plays an important part in the integration of Distributed/Agile/Virtual Enterprises (D/A/VEs). However, as this paper points out, resource system selection remains a difficult problem to solve in a D/A/VE. Globally, the selection problem has been approached from different angles, giving rise to different kinds of models and algorithms to solve it. To support the development of an intelligent, flexible web prototype tool (broker tool) that integrates all the selection model's activities and tools and can adapt to each D/A/VE project or instance (the major goal of our final project), this paper presents a formulation of one kind of resource selection problem and the limitations of the algorithms proposed to solve it. We formulate a particular case of the problem as an integer program, solve it using simplex and branch-and-bound algorithms, and identify their performance limitations (in terms of processing time) based on simulation results. These limitations depend on the number of processing tasks and on the number of pre-selected resources per processing task, defining the domain of applicability of the algorithms for the problem studied. The detected limitations show the need for other kinds of algorithms (approximate-solution algorithms) outside the domain of applicability found for the simulated algorithms. For a broker tool, however, knowing the algorithms' limitations is very important in order to develop and select, based on the problem's features, the most suitable algorithm that guarantees good performance.
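To make the enumeration cost concrete, here is a toy Java branch-and-bound for a simplified version of the selection problem (one resource per task, each resource serving at most one task, minimizing total cost); the paper's integer programming formulation and instances are richer, and the cost data below is invented:

```java
// Illustrative branch-and-bound for a toy selection problem. Shows why
// the search grows with the number of tasks and of pre-selected
// resources per task; not the paper's actual formulation.
public class SelectionBnB {
    static final double[][] COST = {     // cost[task][resource], hypothetical data
        {4, 2, 8},
        {6, 3, 7},
        {5, 9, 1},
    };
    static double best = Double.MAX_VALUE;

    public static void main(String[] args) {
        branch(0, 0, new boolean[COST[0].length]);
        System.out.println("optimal cost: " + best); // 8.0 for the data above
    }

    static void branch(int task, double cost, boolean[] used) {
        if (cost >= best) return;                     // bound: prune dominated branches
        if (task == COST.length) { best = cost; return; }
        for (int r = 0; r < COST[task].length; r++) { // branch on each feasible resource
            if (!used[r]) {
                used[r] = true;
                branch(task + 1, cost + COST[task][r], used);
                used[r] = false;
            }
        }
    }
}
```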
Abstract:
The present generation of eLearning platforms values the interchange of learning objects through standards. Nevertheless, for specialized domains these standards are insufficient to fully describe all the assets, especially when they are used as input for other eLearning services. To address this issue, we extended an existing learning object standard to the particular requirements of a specialized domain, namely the automatic evaluation of programming problems. The focus of this paper is the definition of programming problems as learning objects. We introduce a new schema to represent metadata related to automatic evaluation that cannot be conveniently represented using existing standards, such as: the type of automatic evaluation; the requirements of the evaluation engine; or the roles of different assets, such as test cases and program solutions. This new schema is being used in an interoperable repository of learning objects, called crimsonHex.
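As a rough, hypothetical illustration of the kind of metadata the new schema adds (the actual schema is the one defined in the paper and used by crimsonHex), a small Java model:

```java
// Hypothetical Java model of evaluation metadata attached to a
// programming-problem learning object; field names are illustrative,
// not the crimsonHex schema itself.
import java.util.List;

record EvaluationMetadata(
        String evaluationType,            // e.g. black-box test-case evaluation
        String engineRequirements,        // what the evaluation engine needs
        List<Asset> assets) {}            // the exercise's interdependent resources

record Asset(String path, Role role) {}

enum Role { EXERCISE_DESCRIPTION, TEST_CASE, PROGRAM_SOLUTION }
```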
Abstract:
Standards for learning objects focus primarily on content presentation. They have already been extended to support automatic evaluation, but only for exercises with a predefined set of answers. The existing standards lack the metadata required by specialized evaluators to handle types of exercises with an indefinite set of solutions. To address this issue, we extended existing learning object standards to the particular requirements of a specialized domain. We present a definition of programming problems as learning objects that is compatible both with Learning Management Systems and with systems performing automatic evaluation of programs. The proposed definition includes metadata that cannot be conveniently represented using existing standards, such as: the type of automatic evaluation; the requirements of the evaluation engine; and the roles of different assets, such as test cases and program solutions. We also present the EduJudge project and its main services as a case study on the use of the proposed definition of programming problems as learning objects.
Abstract:
Several standards have appeared in recent years to formalize the metadata of learning objects, but they are still insufficient to fully describe a specialized domain. In particular, the programming exercise domain requires interdependent resources (e.g. test cases, solution programs, exercise descriptions) usually processed by different services in the programming exercise life-cycle. Moreover, the manual creation of these resources is time-consuming and error-prone, becoming an obstacle to the fast development of programming exercises of good quality. This paper focuses on the definition of an XML dialect called PExIL (Programming Exercises Interoperability Language). The aim of PExIL is to consolidate all the data required in the programming exercise life-cycle, from when it is created to when it is graded, covering also the resolution, the evaluation and the feedback. We introduce the XML Schema used to formalize the relevant data of the programming exercise life-cycle. The validation of this approach is made through the evaluation of the usefulness and expressiveness of the PExIL definition. For the former, we present the tools that consume the PExIL definition to automatically generate the specialized resources. For the latter, we use the PExIL definition to capture all the constraints of a set of programming exercises stored in a learning objects repository.
Abstract:
Several standards have appeared in recent years to formalize the metadata of learning objects, but they are still insufficient to fully describe a specialized domain. In particular, the programming exercise domain requires interdependent resources (e.g. test cases, solution programs, exercise description) usually processed by different services in the programming exercise lifecycle. Moreover, the manual creation of these resources is time-consuming and error-prone, leading to an obstacle to the fast development of programming exercises of good quality. This chapter focuses on the definition of an XML dialect called PExIL (Programming Exercises Interoperability Language). The aim of PExIL is to consolidate all the data required in the programming exercise lifecycle from when it is created to when it is graded, covering also the resolution, the evaluation, and the feedback. The authors introduce the XML Schema used to formalize the relevant data of the programming exercise lifecycle. The validation of this approach is made through the evaluation of the usefulness and expressiveness of the PExIL definition. In the former, the authors present the tools that consume the PExIL definition to automatically generate the specialized resources. In the latter, they use the PExIL definition to capture all the constraints of a set of programming exercises stored in a learning objects repository.
Abstract:
To meet the increasing demands of complex inter-organizational processes and the demand for continuous innovation and internationalization, new forms of organisation are being adopted, fostering more intensive collaboration processes and sharing of resources, in what can be called collaborative networks (Camarinha-Matos, 2006:03). Information and knowledge are crucial resources in collaborative networks, and their management is a fundamental process to optimize. Knowledge organisation and collaboration systems are thus important instruments for the success of collaborative networks of organisations, and have been researched over the last decade in the areas of computer science, information science, management sciences, terminology and linguistics. Nevertheless, research in this area has paid little attention to multilingual contexts of collaboration, which pose specific and challenging problems. It is clear that access to and representation of knowledge will increasingly happen in multilingual settings, which implies overcoming the difficulties inherent in the presence of multiple languages, through processes such as the localization of ontologies. Although localization, like other processes that involve multilingualism, is a rather well-developed practice, and its methodologies and tools are fruitfully employed by the language industry in the development and adaptation of multilingual content, it has not yet been sufficiently explored as an element of support for the development of knowledge representations, in particular ontologies, expressed in more than one language. Multilingual knowledge representation is therefore an open research area calling for cross-contributions from knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences.

This workshop brought together researchers interested in multilingual knowledge representation, in a multidisciplinary environment, to debate the possibilities of cross-fertilization between these fields in contexts where multilingualism continuously creates new and demanding challenges for current knowledge representation methods and techniques. Six papers dealing with different approaches to multilingual knowledge representation are presented, most of them describing tools, approaches and results obtained in ongoing projects.

In the first paper, Andrés Domínguez Burgos, Koen Kerremans and Rita Temmerman present a software module that is part of a workbench for terminological and ontological mining: Termontospider, a wiki crawler that aims to optimally traverse Wikipedia in search of domain-specific texts for extracting terminological and ontological information. The crawler is part of a tool suite for automatically developing multilingual termontological databases, i.e. ontologically underpinned multilingual terminological databases. The authors describe the basic principles behind the crawler and summarize the research setting in which the tool is currently being tested.

In the second paper, Fumiko Kano presents work comparing four feature-based similarity measures derived from the cognitive sciences. The purpose of the comparative analysis is to identify the potentially most effective model for mapping independent ontologies in a culturally influenced domain, verifying the similarity measures against objectively developed datasets based on standardized, pre-defined feature dimensions and values obtainable from the UNESCO Institute for Statistics (UIS). According to the author, the results demonstrate that the Bayesian Model of Generalization provides the most effective cognitive model for identifying the most similar corresponding concepts for a targeted socio-cultural community.

In the third paper, Thierry Declerck, Hans-Ulrich Krieger and Dagmar Gromann present ongoing work and propose an approach to the automatic extraction of information from multilingual financial Web resources, to provide candidate terms for building ontology elements or instances of ontology concepts. The authors present an approach complementary to the direct localization/translation of ontology labels: acquiring terminologies by accessing and harvesting the multilingual Web presences of structured information providers in the field of finance. This leads to the detection of candidate terms in various multilingual sources in the financial domain that can be used not only as labels of ontology classes and properties, but also for the possible generation of (multilingual) domain ontologies themselves.

In the next paper, Manuel Silva, António Lucas Soares and Rute Costa claim that, despite the availability of tools, resources and techniques aimed at the construction of ontological artifacts, developing a shared conceptualization of a given reality still raises questions about the principles and methods that support the initial phases of conceptualization. According to the authors, these questions become more complex when the conceptualization occurs in a multilingual setting. To tackle these issues, the authors present a collaborative platform, conceptME, where terminological and knowledge representation processes support domain experts throughout a conceptualization framework, allowing the inclusion of multilingual data as a way to promote knowledge sharing, enhance conceptualization and support a multilingual ontology specification.

In the fifth paper, Frieda Steurs and Hendrik J. Kockaert present TermWise, a large project dealing with legal terminology and phraseology for the Belgian public services, i.e. the translation office of the ministry of justice. The project aims at developing an advanced tool that includes expert knowledge in the algorithms that extract specialized language from textual data (legal documents); its outcome is a knowledge database of Dutch/French equivalents for legal concepts, enriched with the phraseology related to the terms under discussion.

Finally, Deborah Grbac, Luca Losito, Andrea Sada and Paolo Sirito report on the preliminary results of a pilot project currently ongoing at the UCSC Central Library, where they propose to adapt, for subject librarians employed in large and multilingual academic institutions, the model used by translators working within European Union institutions. The authors are using User Experience (UX) analysis to provide subject librarians with visual support, by means of "ontology tables" depicting the conceptual linking and connections of words with concepts, presented according to their semantic and linguistic meaning.

The organizers hope that the selection of papers presented here will be of interest to a broad audience and will be a starting point for further discussion and cooperation.
Abstract:
Master's in Informatics Engineering - Area of Specialization in Knowledge and Decision Technologies