977 results for Software architecture document
MINING AND VERIFICATION OF TEMPORAL EVENTS WITH APPLICATIONS IN COMPUTER MICRO-ARCHITECTURE RESEARCH
Abstract:
Computer simulation programs are essential tools for scientists and engineers seeking to understand a particular system of interest. As expected, the complexity of the software increases with the depth of the model used. Beyond the exacting demands of software engineering, verification of simulation programs is especially challenging because the models represented are complex and riddled with unknowns that are discovered by developers in an iterative process. To manage such complexity, advanced verification techniques for continually matching the intended model to the implemented model are necessary. The main goal of this research is therefore to design a useful verification and validation framework that can identify model representation errors and is applicable to generic simulators. The framework developed and implemented here consists of two parts. The first is the First-Order Logic Constraint Specification Language (FOLCSL), which enables users to specify the invariants of a model under consideration. From the first-order logic specification, the FOLCSL translator automatically synthesizes a verification program that reads the event trace generated by a simulator and signals whether all invariants are respected. The second part mines the temporal flow of events using a newly developed representation called the State Flow Temporal Analysis Graph (SFTAG). While the first part seeks assurance of implementation correctness by checking that the model invariants hold, the second derives an extended model of the implementation and thus enables a deeper understanding of what was implemented. The main application studied in this work is the validation of the timing behavior of micro-architecture simulators. The study includes SFTAGs generated for a wide set of benchmark programs and their analysis using several artificial intelligence algorithms. As the case studies and experiments conducted here show, this work improves computer architecture research and verification processes.
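As an illustration of the first part of the framework, the sketch below shows the kind of checker that FOLCSL synthesizes from an invariant: a program that scans a simulator's event trace and reports whether a temporal invariant holds. The trace format, the event kinds, and the invariant ("every fetched instruction is eventually committed") are illustrative assumptions, not the framework's actual syntax.

    # Minimal sketch of a synthesized invariant checker; the trace format
    # and the invariant are assumptions for illustration only.
    from collections import namedtuple

    Event = namedtuple("Event", ["time", "kind", "instr_id"])

    def read_trace(path):
        """Parse a whitespace-separated trace: <time> <kind> <instr_id>."""
        with open(path) as f:
            for line in f:
                t, kind, iid = line.split()
                yield Event(int(t), kind, int(iid))

    def check_invariant(events):
        """Check: forall i, fetch(i) implies eventually commit(i)."""
        pending = set()
        for ev in events:
            if ev.kind == "fetch":
                pending.add(ev.instr_id)
            elif ev.kind == "commit":
                pending.discard(ev.instr_id)
        return pending  # a non-empty set lists the violating instructions

    violations = check_invariant(read_trace("pipeline.trace"))
    print("all invariants hold" if not violations
          else f"invariant violated for instructions {sorted(violations)}")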
Abstract:
The ability to use Software Defined Radio (SDR) in civilian mobile applications will make it possible for the next generation of mobile devices to support multi-standard personal wireless devices and ubiquitous wireless devices. The original military standard gave SDR many beneficial characteristics, but resulted in a number of disadvantages as well. Many challenges in commercializing SDR remain subjects of interest in the software radio research community. Four main issues that have already been addressed are performance, size, weight, and power. This investigation presents an in-depth study of SDR inter-component communication, measured as total link delay as a function of the number of components and the packet size, in systems based on the Software Communication Architecture (SCA). The study is based on a controlled-environment platform. Results suggest that the total link delay does not increase linearly with the number of components or the packet size. A closed-form expression for the delay was modeled using a logistic function of the number of components and the packet size, and the model performed well when the number of components was large. For mobile applications, energy consumption has become one of the most crucial limitations. SDR will not only provide the flexibility of multi-protocol support; this desirable feature also brings a choice of mobile protocols, and having such a variety of choices creates the problem of selecting the most appropriate protocol for transmission. A real-time algorithm to optimize energy efficiency was therefore also investigated. Communication energy models, including switching estimation, were used to develop a waveform selection algorithm, and simulations were performed to validate the concept.
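The abstract does not give the fitted expression itself; the sketch below only illustrates the modeling step, fitting a logistic curve of delay versus the number of components at a fixed packet size. The data points and initial parameters are hypothetical.

    # Illustrative logistic fit of total link delay vs. number of SCA
    # components; sample data and parameters are invented, not the
    # thesis's measurements.
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(n, L, k, n0):
        """Saturating delay model: L / (1 + exp(-k * (n - n0)))."""
        return L / (1.0 + np.exp(-k * (n - n0)))

    components = np.array([2, 4, 6, 8, 10, 12])            # hypothetical
    delay_ms = np.array([1.2, 2.9, 5.8, 8.1, 9.0, 9.4])    # hypothetical

    (L, k, n0), _ = curve_fit(logistic, components, delay_ms,
                              p0=[10.0, 1.0, 6.0])
    print(f"fitted: L={L:.2f} ms, k={k:.2f}, n0={n0:.2f}")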
Abstract:
CERN, the European Organization for Nuclear Research, is one of the largest research centers worldwide, responsible for several discoveries in physics as well as in computer science. The CERN Document Server, also known as CDS Invenio, is software developed at CERN that aims to provide a set of tools for managing digital libraries. In order to improve the functionality of CDS Invenio, a new module called BibCirculation was developed to manage books (and other items) from the CERN library, working as an Integrated Library System. This thesis describes the steps taken to achieve the goals of the project, explaining, among other aspects, the process of integration with the other existing modules as well as the approach used to associate book information with CDS Invenio metadata. A detailed account of the entire implementation and testing process is also given. Finally, the conclusions of the project and ideas for future development are presented.
Abstract:
Nuclear cross sections are the pillars on which the transport simulation of particles and radiation is built. Since the nuclear data library production chain is extremely complex and made up of many steps, stringent verification and validation (V&V) procedures must be applied to it. The work presented here focuses on the development of new Python-based software called JADE, whose objective is to significantly increase the automation and standardization of these procedures, reducing the time between new library releases while increasing their quality. After an introduction to nuclear fusion (the field where most of the V&V effort has been concentrated so far) and to the simulation of particle and radiation transport, the motivations behind JADE's development are discussed. Subsequently, the code's general architecture and the implemented benchmarks (both experimental and computational) are described, followed by the results from the major applications of JADE during the research period. Finally, after a discussion of the objectives reached by JADE, possible short-, mid- and long-term developments for the project are outlined.
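The gist of such automated V&V can be conveyed by a toy comparison step: results obtained with a candidate library are divided by reference values and flagged when they deviate beyond a tolerance. The tally names and numbers below are invented, and JADE's real pipeline (benchmark suites, transport runs, report generation) is considerably richer.

    # Schematic core of an automated library-to-reference comparison;
    # all tallies and values are invented for illustration.
    reference = {"neutron flux": 1.02e9, "heating": 4.7e2, "dpa": 3.1e-5}
    candidate = {"neutron flux": 1.05e9, "heating": 4.5e2, "dpa": 3.0e-5}
    TOLERANCE = 0.05  # deviations above 5% get flagged for review

    for tally, ref in reference.items():
        ratio = candidate[tally] / ref
        flag = "" if abs(ratio - 1.0) <= TOLERANCE else "  <-- check"
        print(f"{tally:13s} ratio = {ratio:.3f}{flag}")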
Abstract:
The aim of this thesis is to present a new approach to document classification using verb-object pairs. We explore a strategy that uses the presence of relevant verb-object pairs in documents as features, on which a Naive Bayes classifier is trained. We then assess the results of a case study using software based on this strategy and draw conclusions.
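A minimal sketch of the strategy, assuming the verb-object pairs have already been extracted (the extraction step, presumably done with a syntactic parser, is stubbed out here as pre-tokenized pairs):

    # Verb-object pairs as binary presence features for a Naive Bayes
    # classifier; the documents and labels are toy examples.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    docs = [
        "compile_code fix_bug run_test",      # pre-extracted pairs
        "score_goal win_match kick_ball",
        "fix_bug merge_branch run_test",
    ]
    labels = ["software", "sports", "software"]

    vec = CountVectorizer(binary=True)  # presence/absence of each pair
    clf = MultinomialNB().fit(vec.fit_transform(docs), labels)

    print(clf.predict(vec.transform(["run_test fix_bug"])))  # ['software']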
Abstract:
In the metal industry, and more specifically in forging, scrap material is a crucial issue, and reducing it is an important goal. Not only would this help companies become more environmentally friendly and more sustainable, it would also reduce energy use and lower costs. At the same time, Industry 4.0 techniques and advances in Artificial Intelligence (AI), especially in the field of Deep Reinforcement Learning (DRL), may play an important role in achieving this objective. This document presents the thesis work, a contribution to the SmartForge project, performed during a semester abroad at Karlstad University (Sweden). The project aims to solve the aforementioned problem in a business case of the company Bharat Forge Kilsta, located in Karlskoga (Sweden). The thesis work comprises the design and subsequent development of an event-driven architecture with microservices to process data coming from sensors in the company's industrial plant, and finally the implementation of an algorithm based on DRL techniques to control the electrical power used in the plant.
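The following sketch illustrates the event-driven pattern described above, with a plain in-process queue standing in for the message broker and a stubbed policy in place of the DRL agent; the sensor fields, threshold, and power values are invented for illustration.

    # Event-driven consumer feeding sensor events to a control policy;
    # queue.Queue stands in for a real broker, and the DRL agent is a stub.
    import queue
    from dataclasses import dataclass

    @dataclass
    class SensorEvent:
        sensor_id: str
        temperature_c: float

    def policy(event: SensorEvent) -> float:
        """Stub for the DRL agent: map observed state to power (kW)."""
        return 100.0 if event.temperature_c < 1200.0 else 80.0

    broker = queue.Queue()  # stand-in for the plant's message broker
    broker.put(SensorEvent("furnace-1", 1180.0))

    while not broker.empty():
        ev = broker.get()
        print(f"{ev.sensor_id}: set power to {policy(ev)} kW")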
Abstract:
Modern High-Performance Computing (HPC) systems are gradually increasing in size and complexity due to the corresponding demand for larger simulations requiring more complicated tasks and higher accuracy. At the same time, as a side effect of Dennard scaling approaching its ultimate power limit, software efficiency plays an increasingly important role in the overall performance of a computation. Tools that measure application performance in these increasingly complex environments provide insight into the intricate ways in which software and hardware interact. Monitoring power consumption in order to save energy is possible through processor interfaces such as Intel's Running Average Power Limit (RAPL); given the low level of these interfaces, they are often paired with an application-level tool such as the Performance Application Programming Interface (PAPI). Since problems in many heterogeneous fields can be represented as large linear systems, an optimized and scalable linear system solver can significantly decrease the time needed to compute their solution. One of the most widely used algorithms for solving large systems is Gaussian elimination, whose most popular implementation for HPC systems is found in the Scalable Linear Algebra PACKage (ScaLAPACK) library. Another relevant algorithm, which is gaining popularity in the academic field, is the Inhibition Method. This thesis compares the energy consumption of the Inhibition Method and of ScaLAPACK's Gaussian elimination, profiling their execution during the resolution of linear systems on the HPC architecture offered by CINECA. It also collates the energy and power values for different rank, node, and socket configurations. The monitoring tools employed to track the energy consumption of these algorithms are PAPI and RAPL, integrated with the parallel execution of the algorithms, which is managed with the Message Passing Interface (MPI).
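For reference, energy readings of the kind collected in the thesis can be obtained on Linux from RAPL's powercap sysfs interface, which PAPI wraps at a higher level. The sketch below reads the package-0 energy counter directly; it assumes an Intel CPU and read permission on the powercap files, and it ignores counter wrap-around.

    # Direct RAPL energy measurement via the Linux powercap interface;
    # the workload is a placeholder for the linear solver under test.
    import time

    RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package-0 counter

    def read_energy_uj():
        with open(RAPL) as f:
            return int(f.read())

    start_e, start_t = read_energy_uj(), time.time()
    sum(i * i for i in range(10_000_000))  # placeholder workload
    joules = (read_energy_uj() - start_e) / 1e6  # wrap-around ignored
    seconds = time.time() - start_t
    print(f"energy: {joules:.2f} J, avg power: {joules / seconds:.2f} W")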
Abstract:
The idea of Grid Computing originated in the nineties and found concrete application in contexts like the SETI@home project, where many computers offered by volunteers cooperated in distributed computations within the Grid environment, analyzing radio signals in search of extraterrestrial life. The Grid was composed of traditional personal computers, but with the emergence of the first mobile devices, such as Personal Digital Assistants (PDAs), researchers began theorizing about the inclusion of mobile devices in Grid Computing; although impressive theoretical work was done, the idea was discarded due to the (mainly technological) limitations of the mobile devices available at the time. Decades have passed, and mobile devices are now far more powerful and numerous than before, leaving a great amount of resources on smartphones and tablets untapped. Here we propose a solution for performing distributed computations over a Grid Computing environment that uses both desktop and mobile devices, exploiting resources from everyday mobile users that would otherwise remain unused. The work starts with an introduction to Grid Computing, the evolution of mobile devices, the idea of integrating such devices into the Grid, and how to convince device owners to participate. The tone then becomes more technical, starting with an explanation of how Grid Computing actually works, followed by the technical challenges of integrating mobile devices into the Grid. Next, the model that constitutes the solution offered by this study is explained, followed by a chapter on the realization of a prototype that demonstrates the feasibility of distributed computations over a Grid composed of both mobile and desktop devices. To conclude, future developments and ideas for improving the project are presented.
Abstract:
Version 7 of the Bundle Protocol (BP) was recently standardized by the IETF in RFC 9171, but it is the whole Delay-/Disruption-Tolerant Networking (DTN) architecture, of which BP is the core, that is enjoying renewed interest thanks to its planned adoption in future space missions. This is obviously positive, but at the same time it seems to make space agencies more interested in deployment than in research, with new BP implementations that may challenge the central role played until now by the historical BP reference implementations, such as ION and DTNME. To make Unibo research on DTN independent of space agency decisions, the development of an in-house BP implementation was in order. This is the goal of this thesis, which deals with the design and implementation of Unibo-BP: a novel, research-driven BP implementation, to be released as Free Software. Unibo-BP is fully compliant with RFC 9171, as demonstrated by a series of interoperability tests with ION and DTNME, and presents a few innovations, such as the ability to manage remote DTN nodes by means of BP itself. Unibo-BP is compatible with pre-existing Unibo implementations of Contact Graph Routing (CGR) and the Licklider Transmission Protocol (LTP) thanks to interfaces designed during the thesis. The thesis project also includes an implementation of the TCP Convergence Layer version 3 (TCPCLv3, RFC 7242), which can be used as an alternative to LTPCL to connect with proximate nodes, especially in terrestrial networks. In summary, Unibo-BP is at the heart of a larger project, Unibo-DTN, which aims to implement the main components of a complete DTN stack (BP, TCPCL, LTP, CGR). Moreover, Unibo-BP is compatible with all DTNsuite applications, thanks to an extension of the Unified API library on which DTNsuite applications are based. The hope is that Unibo-BP and the ancillary programs developed during this thesis will contribute to the growth of DTN's popularity in academia and among space agencies.
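To give a flavor of what RFC 9171 compliance entails, the sketch below encodes a BPv7-style primary block as a CBOR array, using the ipn EID scheme and no CRC. It is a didactic reconstruction from the RFC, not Unibo-BP code.

    # BPv7 primary block as a CBOR array per RFC 9171:
    # [version, flags, crc-type, dest, source, report-to,
    #  (creation-time, seq), lifetime]; didactic example only.
    import cbor2

    def ipn_eid(node: int, service: int):
        return [2, [node, service]]  # 2 = ipn URI scheme code

    primary_block = [
        7,                  # BP version
        0,                  # bundle processing control flags
        0,                  # CRC type 0 = no CRC
        ipn_eid(2, 1),      # destination EID ipn:2.1
        ipn_eid(1, 1),      # source node EID ipn:1.1
        ipn_eid(1, 0),      # report-to EID
        [0, 0],             # creation timestamp (DTN time, sequence no.)
        86_400_000,         # lifetime in milliseconds (one day)
    ]
    print(cbor2.dumps(primary_block).hex())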
Abstract:
This article aimed to compare the accuracy of the linear measurement tools of different commercial software packages. Eight fully edentulous dry mandibles were selected for this study, and the incisor, canine, premolar, first molar and second molar regions were examined. Cone beam computed tomography (CBCT) images were obtained with an i-CAT Next Generation scanner. Linear bone measurements were performed by one observer on the cross-sectional images using three software packages capable of assessing DICOM images: XoranCAT®, OnDemand3D® and KDIS3D®. In addition, 25% of the sample was reevaluated to assess reproducibility. The mandibles were then sectioned to obtain the gold standard for each region. Intraclass correlation coefficients (ICC) were calculated to examine the agreement between the two evaluation periods, and one-way analysis of variance followed by the post-hoc Dunnett test was used to compare each of the software-derived measurements with the gold standard. The ICC values were excellent for all software packages. The smallest differences between the software-derived measurements and the gold standard were obtained with OnDemand3D and KDIS3D (-0.11 mm and -0.14 mm, respectively), and the largest with XoranCAT (+0.25 mm). However, there was no statistically significant difference between the measurements obtained with the different software packages and the gold standard (p > 0.05). In conclusion, linear bone measurements were not influenced by the software package used to reconstruct the image from CBCT DICOM data.
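The comparison against the gold standard can be sketched with Dunnett's test as available in SciPy (version 1.11 or later); the measurement vectors below are invented stand-ins for the study's data.

    # Dunnett's test comparing each software package's measurements
    # against the gold standard as control; all values are invented.
    import numpy as np
    from scipy.stats import dunnett

    gold = np.array([10.2, 11.5, 9.8, 12.1, 10.7])       # physical sections
    xorancat = gold + np.random.normal(0.25, 0.3, 5)      # hypothetical
    ondemand = gold + np.random.normal(-0.11, 0.3, 5)     # hypothetical
    kdis3d = gold + np.random.normal(-0.14, 0.3, 5)       # hypothetical

    res = dunnett(xorancat, ondemand, kdis3d, control=gold)
    for name, p in zip(["XoranCAT", "OnDemand3D", "KDIS3D"], res.pvalue):
        print(f"{name}: p = {p:.3f}")  # p > 0.05 -> no significant difference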