983 results for C programming languages
Abstract:
In this work, a graphical user interface (GUI) is implemented using the Qt toolkit from Nokia (version 3.0). The interface aims to simplify the creation of scenarios for running parallel simulations based on the Local Nonorthogonal Finite-Difference Time-Domain (LN-FDTD) numerical technique, applied to solve Maxwell's equations. The simulator was developed in the C programming language and parallelized using threads, for which the pthread library was employed. The 3D visualization of the scenario to be simulated (and of the mesh) is performed by a purpose-built program that uses the OpenGL library. To improve development and meet the goals of the computational project, Software Engineering concepts were applied, such as the prototyping software process model. By keeping the user from interacting directly with the simulation source code, the probability of human error during scenario construction is minimized. To demonstrate the developed tool, a study was carried out on the effect of sag in low-voltage power lines on the transient voltages induced on them by lightning discharges. The voltages induced at the building's outlets are also studied.
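Purely as an illustration of the parallelization strategy this abstract names (C with the pthread library), here is a minimal sketch in which a grid update is split into row slabs, one per thread. The grid size, slab decomposition, and stencil are invented stand-ins, not the thesis' LN-FDTD code.

```cpp
// Illustrative only: splits an FDTD-style grid update across pthreads.
// Field arrays, slab decomposition and update rule are hypothetical.
#include <pthread.h>
#include <stdio.h>

#define NX 512
#define NY 512
#define NTHREADS 4

static double field[NX][NY], field_prev[NX][NY];

typedef struct { int row_begin, row_end; } slab_t;

static void *update_slab(void *arg) {
    slab_t *s = (slab_t *)arg;
    for (int i = s->row_begin; i < s->row_end; i++)
        for (int j = 1; j < NY - 1; j++)
            /* placeholder stencil standing in for the LN-FDTD update */
            field[i][j] = 0.25 * (field_prev[i - 1][j] + field_prev[i + 1][j] +
                                  field_prev[i][j - 1] + field_prev[i][j + 1]);
    return NULL;
}

int main(void) {
    pthread_t tid[NTHREADS];
    slab_t slab[NTHREADS];
    int rows = (NX - 2) / NTHREADS;
    for (int t = 0; t < NTHREADS; t++) {
        slab[t].row_begin = 1 + t * rows;
        slab[t].row_end = (t == NTHREADS - 1) ? NX - 1 : slab[t].row_begin + rows;
        pthread_create(&tid[t], NULL, update_slab, &slab[t]);
    }
    for (int t = 0; t < NTHREADS; t++)
        pthread_join(tid[t], NULL);
    printf("one time step done\n");
    return 0;
}
```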
Abstract:
In a world that increasingly runs on software, effective approaches that encourage software reuse become necessary. Reuse must be aligned with a set of practices, procedures and methodologies that yield a stable, high-quality product. These questions give rise to new styles and approaches in software engineering. This thesis therefore addresses concepts related to model-driven development and model-driven architecture. The model-driven approach automates significant parts of development, drawing on the models produced during the specification phase. Defining terms such as model, architecture and platform sharpens the focus, because for MDA and MDD it is important to separate technical concerns from business concerns. Important processes are covered so as to highlight the artifacts built at each stage of model-driven development. The stages of development (CIM, PIM, PSM and ISM) are presented, detailing the purpose of each phase, so that at the end of each stage progressively more specialized artifacts are produced. The models are handled from different modeling perspectives, abstracting concepts and assembling the set of details that portrays a specific scenario. This portrayal can be a graphical or textual representation; in most cases, however, a modeling language such as UML is chosen. To provide a practical view, this dissertation presents some tools that improve the construction of models and the generation of code that assists development, keeping the documentation consistent with the system. Finally, it presents a case study that revisits the theoretical aspects discussed throughout the dissertation; it is expected that model-driven architecture and development can thereby make clear which features are important to consider in software engineering.
Abstract:
This thesis investigates two aspects of Constraint Handling Rules (CHR): it proposes a compositional semantics and a technique for program transformation. CHR is a concurrent committed-choice constraint logic programming language consisting of guarded rules, which transform multi-sets of atomic formulas (constraints) into simpler ones until exhaustion [Frü06]; it belongs to the family of declarative languages. It was initially designed for writing constraint solvers, but it has recently also proven to be a general-purpose language, being Turing equivalent [SSD05a]. Compositionality is the first CHR aspect to be considered. A trace-based compositional semantics for CHR was previously defined in [DGM05]. The reference operational semantics for that compositional model was the original operational semantics for CHR, which, due to the propagation rule, admits trivial non-termination. In this thesis we extend the work of [DGM05] by introducing a more refined trace-based compositional semantics which also includes the history. The use of a history is a well-known technique in CHR which allows us to trace the application of propagation rules and consequently to avoid trivial non-termination [Abd97, DSGdlBH04]. Naturally, the reference operational semantics of our new compositional one also uses a history to avoid trivial non-termination. Program transformation is the second CHR aspect to be considered, with particular regard to the unfolding technique. This technique is an appealing approach that allows us to optimize a given program, and more specifically to improve its run-time efficiency or space consumption. Essentially, it consists of a sequence of syntactic program manipulations which preserve a kind of semantic equivalence, called qualified answer [Frü98], between the original program and the transformed ones. Unfolding is one of the basic operations used by most program transformation systems: it consists in the replacement of a procedure call by its definition. In CHR, every conjunction of constraints can be considered a procedure call, every CHR rule can be considered a procedure, and the body of the rule represents the definition of the call. While there is a large body of literature on transformation and unfolding of sequential programs, very few papers have addressed this issue for concurrent languages. We define an unfolding rule, show its correctness, and discuss some conditions under which it can be used to delete an unfolded rule while preserving the meaning of the original program. Finally, maintenance of confluence and termination between the original and transformed programs is shown. This thesis is organized as follows. Chapter 1 gives some general notions about CHR. Section 1.1 outlines the history of programming languages, with particular attention to CHR and related languages. Section 1.2 then introduces CHR using examples. Section 1.3 gives some preliminaries which will be used throughout the thesis. Subsequently, Section 1.4 introduces the syntax and the operational and declarative semantics of the first CHR language proposed. Finally, the methodologies for solving the problem of trivial non-termination related to propagation rules are discussed in Section 1.5. Chapter 2 introduces a compositional semantics for CHR in which the propagation rules are considered. In particular, Section 2.1 contains the definition of the semantics, Section 2.2 presents the compositionality results, and Section 2.3 expounds the correctness results. Chapter 3 presents a particular program transformation known as unfolding. This transformation needs a particular syntax, called annotated, which is introduced in Section 3.1, and its related modified operational semantics ω_t is presented in Section 3.2. Subsequently, Section 3.3 defines the unfolding rule and proves its correctness. Then, in Section 3.4, the problems related to the replacement of a rule by its unfolded version are discussed, which in turn yields a correctness condition that holds for a specific class of rules. Section 3.5 proves that confluence and termination are preserved by the program modifications introduced. Finally, Chapter 4 concludes by discussing related work and directions for future work.
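The unfolding operation described above, replacing a procedure call by its definition, is easiest to see in a sequential language. The following C++ fragment is only an analogy (function inlining), with invented names; it is not the CHR-specific unfolding rule defined in the thesis.

```cpp
// Unfolding in a sequential setting: the call to square() in area_before
// is replaced by the body of its definition, yielding area_after.
// This is only an analogy for the CHR rule unfolding studied in the thesis.
int square(int x) { return x * x; }

int area_before(int side) {
    return square(side);        // procedure call
}

int area_after(int side) {
    return side * side;         // call unfolded: definition substituted in place
}
```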
Abstract:
In recent years, Intelligent Tutoring Systems have proved a very successful way to improve the learning experience. Many issues must still be addressed before this technology can be considered mature. One of the main problems in Intelligent Tutoring Systems is content authoring: knowledge acquisition and manipulation are difficult tasks because they require specialized skills in computer programming and knowledge engineering. In this thesis we discuss a general framework for knowledge management in an Intelligent Tutoring System and propose a mechanism based on first-order data mining to partially automate the acquisition of the knowledge that the ITS uses during the tutoring process. Such a mechanism can be applied in Constraint-Based Tutors and in Pseudo-Cognitive Tutors. We design and implement part of the proposed architecture, mainly the module for knowledge acquisition from examples based on first-order data mining. We then show that the algorithm can be applied to at least two different domains: first-order algebra equations and some topics of the C programming language. Finally, we discuss the limitations of the current approach and possible improvements to the whole framework.
Abstract:
The increasing precision of current and future experiments in high-energy physics requires a corresponding increase in the accuracy of theoretical predictions, in order to find evidence for possible deviations from the generally accepted Standard Model of elementary particles and interactions. Calculating the experimentally measurable cross sections of scattering and decay processes to higher accuracy directly translates into including higher-order radiative corrections in the calculation. The large number of particles and interactions in the full Standard Model results in an exponentially growing number of Feynman diagrams contributing to any given process in higher orders. Additionally, the appearance of multiple independent mass scales makes even the calculation of single diagrams non-trivial. For over two decades now, the only way to cope with these issues has been to rely on the assistance of computers. The aim of the xloops project is to provide the tools needed to automate the calculation procedures as far as possible, including the generation of the contributing diagrams and the evaluation of the resulting Feynman integrals. The latter is based on the techniques developed in Mainz for solving one- and two-loop diagrams in a general and systematic way using parallel/orthogonal space methods. These techniques involve a considerable amount of symbolic computation. During the development of xloops it was found that conventional computer algebra systems were not a suitable implementation environment. For this reason, a new system called GiNaC has been created, which allows the development of large-scale symbolic applications in an object-oriented fashion within the C++ programming language. This system, which is now also in use for other projects besides xloops, is the main focus of this thesis. The implementation of GiNaC as a C++ library sets it apart from other algebraic systems. Our results prove that a highly efficient symbolic manipulator can be designed in an object-oriented way, and that having a very fine granularity of objects is also feasible. The xloops-related parts of this work consist of a new implementation, based on GiNaC, of functions for calculating one-loop Feynman integrals that already existed in the original xloops program, as well as the addition of supplementary modules belonging to the interface between the library of integral functions and the diagram generator.
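Since GiNaC is a publicly available C++ library, the object-oriented style of symbolic manipulation described above can be illustrated with a short example against its public API; the expression chosen is arbitrary.

```cpp
// Small example of symbolic computation with the GiNaC C++ library.
// Typically compiled with: g++ example.cpp $(pkg-config --cflags --libs ginac)
#include <ginac/ginac.h>
#include <iostream>

int main() {
    GiNaC::symbol x("x"), y("y");

    // Symbolic expressions are ordinary C++ objects of class GiNaC::ex.
    GiNaC::ex e = GiNaC::pow(x + y, 3);

    std::cout << e.expand() << std::endl;   // x^3+3*x^2*y+3*x*y^2+y^3
    std::cout << e.diff(x) << std::endl;    // derivative with respect to x
    return 0;
}
```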
Abstract:
Modern embedded systems embrace many-core shared-memory designs. Due to constrained power and area budgets, most of them feature software-managed scratchpad memories instead of data caches to increase data locality. It is therefore the programmer's responsibility to explicitly manage memory transfers, and this makes programming these platforms cumbersome. Moreover, complex modern applications must be adequately parallelized before they can turn the parallel potential of the platform into actual performance. To support this, programming languages have been proposed that work at a high level of abstraction and rely on a runtime whose cost hinders performance, especially in embedded systems, where resources and power budgets are constrained. This dissertation explores the applicability of the shared-memory paradigm to modern many-core systems, focusing on ease of programming. It focuses on OpenMP, the de facto standard for shared-memory programming. In the first part, the costs of algorithms for synchronization and data partitioning are analyzed, and the algorithms are adapted to modern embedded many-cores. Then, the original design of an OpenMP runtime library is presented, which supports complex forms of parallelism such as multi-level and irregular parallelism. The second part of the thesis focuses on heterogeneous systems, where hardware accelerators are coupled to (many-)cores to implement key functional kernels with orders-of-magnitude improvements in speed and energy efficiency compared to the "pure software" version. However, three main issues arise: i) platform design complexity, ii) architectural scalability and iii) programmability. To tackle them, a template for a generic hardware processing unit (HWPU) is proposed, which shares the memory banks with the cores, and a template for a scalable architecture is shown, which integrates the HWPUs through the shared-memory system. Then, a full software stack and toolchain are developed to support platform design and to let programmers exploit the accelerators of the platform. The OpenMP frontend is extended to interact with them.
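For reference, the kind of OpenMP code such a runtime must support looks like the sketch below, including the multi-level (nested) parallelism the abstract mentions; the loop body is an arbitrary placeholder, not taken from the thesis.

```cpp
// Illustrative OpenMP fragment: an outer parallel region with a nested
// one inside it, i.e. the multi-level parallelism the runtime must support.
#include <omp.h>
#include <stdio.h>

int main(void) {
    omp_set_nested(1);                 // enable nested parallel regions
    #pragma omp parallel num_threads(2)
    {
        int outer = omp_get_thread_num();
        #pragma omp parallel num_threads(2)
        {
            printf("outer %d, inner %d\n", outer, omp_get_thread_num());
        }
    }

    // Classic worksharing: the runtime partitions iterations among threads.
    double a[1024];
    #pragma omp parallel for
    for (int i = 0; i < 1024; i++)
        a[i] = i * 0.5;
    return 0;
}
```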
Abstract:
In this thesis, the author presents a query language for an RDF (Resource Description Framework) database and discusses its applications in the context of the HELM project (the Hypertextual Electronic Library of Mathematics). This language aims at meeting the main requirements coming from the RDF community. In particular, it includes: a human-readable textual syntax and a machine-processable XML (Extensible Markup Language) syntax, both for queries and for query results; a rigorously specified formal semantics; a graph-oriented RDF data access model capable of exploring an entire RDF graph (including both RDF Models and RDF Schemata); a full set of Boolean operators to compose the query constraints; fully customizable and highly structured query results with a 4-dimensional geometry; and some constructs borrowed from ordinary programming languages that simplify the formulation of complex queries. The HELM project aims at integrating modern tools for the automation of formal reasoning with the most recent electronic publishing technologies, in order to create and maintain a hypertextual, distributed virtual library of formal mathematical knowledge. In the spirit of the Semantic Web, the documents of this library include RDF metadata describing their structure and content in a machine-understandable form. Using the author's query engine, HELM exploits this information to implement functionalities that allow the interactive and automatic retrieval of documents on the basis of content-aware requests that take into account the mathematical nature of these documents.
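The abstract does not give the syntax of the author's query language, but the underlying graph-oriented data model can be sketched generically: an RDF graph as a set of subject-predicate-object triples, with constraints composed by Boolean AND. Names such as helm:theory below are hypothetical.

```cpp
// Generic sketch of the RDF data model: a graph as a set of
// (subject, predicate, object) triples with a trivial pattern match.
// Illustrates the data model only, not the thesis' query language.
#include <iostream>
#include <string>
#include <vector>

struct Triple { std::string subject, predicate, object; };

// Empty pattern fields act as wildcards.
std::vector<Triple> match(const std::vector<Triple>& graph,
                          const Triple& pattern) {
    std::vector<Triple> result;
    for (const Triple& t : graph) {
        bool ok = (pattern.subject.empty()   || t.subject == pattern.subject) &&
                  (pattern.predicate.empty() || t.predicate == pattern.predicate) &&
                  (pattern.object.empty()    || t.object == pattern.object);
        if (ok) result.push_back(t);   // Boolean AND of the three constraints
    }
    return result;
}

int main() {
    std::vector<Triple> g = {
        {"doc1", "dc:title", "Fundamental Theorem of Algebra"},
        {"doc1", "helm:theory", "algebra"},     // hypothetical predicate
        {"doc2", "helm:theory", "analysis"},
    };
    for (const Triple& t : match(g, {"", "helm:theory", ""}))
        std::cout << t.subject << " -> " << t.object << "\n";
    return 0;
}
```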
Abstract:
This thesis aims to analyze the LTP protocol (in particular its ION implementation) and to propose improvements useful in the presence of high losses. In more detail, an introductory part motivates the ineffectiveness of TCP/IP in the interplanetary environment and introduces the DTN Bundle Protocol architecture (Ch. 1). The thesis continues with a description of the LTP protocol specification (Ch. 2), highlighting in particular how a bundle is encapsulated in an LTP block, how this block is then divided into many LTP segments, and how these are subsequently sent using UDP or an analogous protocol. An in-depth analysis of the penalties caused by the loss of LTP segments, both data and signaling, is then presented (Ch. 3). This analysis demonstrates how critical the effects of losses are, especially with regard to LTP signaling segments. While at low loss rates these effects have, on average, a minimal impact on the delivery time of an LTP block (and thus of the bundle it contains), since they occur rarely, at high loss rates they become a bottleneck for the delivery time of an LTP block. To address this, several modifications are proposed that improve LTP performance (Ch. 4) while remaining compatible with the RFC specifications, so as to guarantee interoperability with the various implementations of the protocol. Chapter 5 then shows how the proposed modifications were implemented in ION 3.4.1. The final chapter (Ch. 6) presents numerical results from preliminary tests comparing the original version of the protocol with the modified versions containing the proposed improvements. The tests proved very positive at high loss rates, thus confirming the validity of the analysis and of the improvements introduced.
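A hedged sketch of the block-to-segment step summarized above (Ch. 2): an LTP block is cut into data segments small enough for a UDP datagram, each tagged with its session and byte offset. The header fields and sizes here are simplified for illustration and do not follow the RFC 5326 wire format.

```cpp
// Simplified illustration of LTP-style segmentation: a block is split
// into segments small enough for the underlying UDP datagram size.
// The header layout is invented; the real wire format is in RFC 5326.
#include <algorithm>
#include <cstdint>
#include <vector>

constexpr size_t SEGMENT_PAYLOAD = 1400;  // fits a typical UDP datagram

struct Segment {
    uint64_t session_id;            // which LTP session the segment belongs to
    uint64_t offset;                // byte offset of this payload in the block
    std::vector<uint8_t> payload;
};

std::vector<Segment> segment_block(uint64_t session_id,
                                   const std::vector<uint8_t>& block) {
    std::vector<Segment> segments;
    for (size_t off = 0; off < block.size(); off += SEGMENT_PAYLOAD) {
        size_t len = std::min(SEGMENT_PAYLOAD, block.size() - off);
        segments.push_back({session_id, off,
                            {block.begin() + off, block.begin() + off + len}});
    }
    return segments;  // each segment would then be sent in one UDP datagram
}
```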
Abstract:
Java Enterprise Applications (JEAs) are large systems that integrate multiple technologies and programming languages. Transactions in JEAs simplify the development of code that deals with failure recovery and multi-user coordination by guaranteeing atomicity of sets of operations. The heterogeneous nature of JEAs, however, can obfuscate conceptual errors in the application code, and in particular can hide incorrect declarations of transaction scope. In this paper we present a technique to expose and analyze the application transaction scope in JEAs by merging and analyzing information from multiple sources. We also present several novel visualizations that aid in the analysis of transaction scope by highlighting anomalies in the specification of transactions and violations of architectural constraints. We have validated our approach on two versions of a large commercial case study.