9 results for Computer software--Development

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

100.00%

Publisher:

Abstract:

Mainstream hardware is becoming parallel, heterogeneous, and distributed, on every desk, in every home, and in every pocket. As a consequence, in recent years software has taken an epochal turn toward concurrency, distribution, and interaction, pushed by the evolution of hardware architectures and the growing availability of networks. This calls for introducing further abstraction layers on top of those provided by classical mainstream programming paradigms, to tackle more effectively the new complexities that developers face in everyday programming. A convergence is recognizable in the mainstream toward the adoption of the actor paradigm as a means to unite object-oriented programming and concurrency. Nevertheless, we argue that the actor paradigm can only be considered a good starting point for a more comprehensive response to such a fundamental and radical change in software development. Accordingly, the main objective of this thesis is to propose Agent-Oriented Programming (AOP) as a high-level, general-purpose programming paradigm and a natural evolution of actors and objects, introducing a further level of human-inspired concepts for programming software systems, meant to simplify the design and programming of concurrent, distributed, reactive/interactive programs. To this end, the dissertation first constructs the required background by studying the state of the art of both actor-oriented and agent-oriented programming, and then focuses on the engineering of integrated programming technologies for developing agent-based systems in their classical application domains: artificial intelligence and distributed artificial intelligence. It then shifts the perspective from the development of intelligent software systems toward general-purpose software development. Drawing on the expertise gained during the background phase, we introduce a general-purpose programming language named simpAL, which is rooted in general principles and practices of software development while providing an agent-oriented level of abstraction for the engineering of general-purpose software systems.
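
For readers unfamiliar with it, here is a minimal sketch of the actor paradigm the thesis starts from: each actor owns private state and a mailbox and processes one message at a time. This is a generic Python illustration with hypothetical names, not simpAL code, which adds agent-level abstractions on top of this model.

```python
# Minimal actor sketch: private state plus a mailbox served by one thread,
# so no locking is needed on the state. Illustrative only.
import threading
import queue

class CounterActor:
    def __init__(self):
        self._mailbox = queue.Queue()
        self._count = 0  # private state, never shared directly
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, message):
        """Asynchronous, non-blocking message send."""
        self._mailbox.put(message)

    def _run(self):
        while True:
            message = self._mailbox.get()
            if message == "stop":
                break
            if message == "inc":
                self._count += 1  # safe: only this thread touches _count

actor = CounterActor()
for _ in range(3):
    actor.send("inc")
actor.send("stop")
```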

Relevance:

100.00%

Publisher:

Abstract:

ALICE, an experiment at CERN's LHC, is specialized in analyzing lead-ion collisions. ALICE will study the properties of quark-gluon plasma, a state of matter where quarks and gluons, under conditions of very high temperature and density, are no longer confined inside hadrons. Such a state of matter probably existed just after the Big Bang, before particles such as protons and neutrons were formed. The SDD detector, one of the ALICE subdetectors, is part of the ITS, which is composed of six cylindrical layers with the innermost one attached to the beam pipe. The ITS tracks and identifies particles near the interaction point, and it also aligns the tracks of the particles detected by the more external detectors. The two middle ITS layers house all 260 SDD detectors. A multichannel readout board, called CARLOSrx, simultaneously receives the data coming from 12 SDD detectors. In total, 24 CARLOSrx boards are needed to read the data coming from all the SDD modules (detector plus front-end electronics). CARLOSrx packs the data coming from the front-end electronics through optical link connections, stores them in a large data FIFO, and then sends them to the DAQ system. Each CARLOSrx is composed of two boards: CARLOSrx data, which reads the data coming from the SDD detectors and configures the FEE, and CARLOSrx clock, which sends the clock signal to all the FEE. This thesis describes the hardware design and firmware features of both the CARLOSrx data and CARLOSrx clock boards, which handle the whole SDD readout chain. A description of the software tools needed to test and configure the front-end electronics is presented at the end of the thesis.
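
As a purely conceptual illustration of the data path described above (12 optical links feeding one large FIFO that is drained toward the DAQ), a toy Python model follows; the names are hypothetical and the real CARLOSrx implements this in firmware, not software.

```python
# Toy model of the CARLOSrx buffering role: tag incoming link data with its
# source, buffer it, forward it in arrival order. Illustrative only.
from collections import deque

NUM_LINKS = 12  # SDD detectors read out by a single CARLOSrx board

class CarlosRxModel:
    def __init__(self):
        self.fifo = deque()  # stands in for the large on-board data FIFO

    def receive(self, link_id, payload):
        # Data arriving on each optical link is packed with its source id.
        assert 0 <= link_id < NUM_LINKS
        self.fifo.append((link_id, payload))

    def drain_to_daq(self):
        # Buffered packets are forwarded to the DAQ system in arrival order.
        while self.fifo:
            yield self.fifo.popleft()

board = CarlosRxModel()
board.receive(0, b"\x01\x02")
board.receive(5, b"\x03")
for packet in board.drain_to_daq():
    print(packet)
```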

Relevance:

100.00%

Publisher:

Abstract:

The Structural Health Monitoring (SHM) research area is increasingly investigated due to its high potential for reducing maintenance costs and for ensuring system safety in several industrial application fields. A growing demand for new SHM systems permanently embedded into structures, for savings in weight and cabling, comes from the aeronautical and aerospace application fields. As a consequence, the embedded electronic devices must be wirelessly connected and battery powered, so low power consumption is required. At the same time, high performance in defect and impact detection and localisation must be ensured to assess structural integrity. To achieve these goals, the design paradigms can be changed together with the associated signal processing. The present thesis proposes design strategies and unconventional solutions, suitable both for real-time monitoring and for periodic inspections, relying on piezo-transducers and Ultrasonic Guided Waves. In the first context, arrays of closely located sensors were designed, according to appropriate optimality criteria, by exploiting sensor re-shaping and optimal positioning, to achieve improved damage/impact localisation performance in noisy environments. An additional sensor re-shaping procedure was developed to tackle another well-known issue arising in realistic scenarios, namely reverberation. A novel sensor, able to filter undesired reflections from mechanical boundaries, was validated via simulations based on the Green's functions formalism and on FEM. In the active SHM context, a novel design methodology was used to develop a single transducer, called the Spectrum-Scanning Acoustic Transducer, to actively inspect a structure. It can estimate the number of defects and their distances with an accuracy of 2 cm. It can also estimate the angular coordinate of a damage with an equivalent mainlobe aperture of 8 deg, when a 24 cm radial gap between two defects is ensured. Suitable signal processing was developed to limit the computational cost, allowing its use with embedded electronic devices.
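
As a hedged illustration of the basic pulse-echo idea behind guided-wave defect ranging (not the thesis' actual processing chain), a reflection arriving t seconds after excitation places the scatterer at distance v·t/2; the group velocity below is a made-up figure.

```python
# Time-of-flight defect ranging sketch with a synthetic echo. Illustrative
# assumptions: known group velocity, single dominant reflection.
import numpy as np

GROUP_VELOCITY = 5000.0  # m/s, hypothetical guided-wave group velocity

def defect_distance(signal, fs, excitation_end_idx):
    """Estimate scatterer distance from the strongest echo in `signal`."""
    envelope = np.abs(signal)
    # Skip the excitation itself, then take the strongest reflection.
    echo_idx = excitation_end_idx + int(np.argmax(envelope[excitation_end_idx:]))
    time_of_flight = echo_idx / fs
    return GROUP_VELOCITY * time_of_flight / 2.0  # round trip, hence /2

fs = 1_000_000                  # 1 MHz sampling rate, hypothetical
signal = np.zeros(2000)
signal[500] = 1.0               # synthetic echo 0.5 ms after time zero
print(f"{defect_distance(signal, fs, 100):.2f} m")  # -> 1.25 m
```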

Relevance:

90.00%

Publisher:

Abstract:

This PhD thesis presents the results, achieved at the Aerospace Engineering Department Laboratories of the University of Bologna, concerning the development of a small-scale Rotary-wing UAV (RUAV). In the first part of the work, a mission simulation environment for rotary-wing UAVs was developed as the main outcome of the University of Bologna partnership in the CAPECON program (an EU-funded research program aimed at studying civil applications of UAVs and the economic effectiveness of the potential configuration solutions). The results achieved in cooperation with DLR (the German Aerospace Centre) and with an industrial helicopter partner are described. In the second part of the work, a real small-scale rotary-wing platform was set up. The work was carried out through a series of logical steps, from hardware selection and set-up to the final autonomous flight tests. This thesis focuses mainly on the set-up of the RUAV avionics package, on the onboard software development, and on the final experimental tests. The set-up of the electronic package allowed recording of the helicopter responses to pilot commands and provided deep insight into small-scale rotorcraft dynamics, facilitating the development of helicopter models and control systems in a Hardware-In-the-Loop (HIL) simulator. A nested PI velocity controller was implemented on the onboard computer and autonomous flight tests were performed. Comparison between HIL simulation and experimental results showed good agreement.
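
As a hedged sketch of the nested PI structure mentioned above, an outer loop can regulate velocity by commanding an attitude setpoint that a faster inner loop tracks; the gains, saturation limits, and loop pairing below are illustrative assumptions, not the controller actually flown in the thesis.

```python
# Nested (cascaded) PI sketch: velocity error -> pitch setpoint -> cyclic
# command. Gains and limits are placeholders for illustration.
class PI:
    def __init__(self, kp, ki, limit):
        self.kp, self.ki, self.limit = kp, ki, limit
        self.integral = 0.0

    def step(self, error, dt):
        self.integral += error * dt
        out = self.kp * error + self.ki * self.integral
        return max(-self.limit, min(self.limit, out))  # simple saturation

outer = PI(kp=0.8, ki=0.1, limit=0.3)  # velocity -> pitch setpoint [rad]
inner = PI(kp=2.0, ki=0.5, limit=1.0)  # pitch -> cyclic command

def control_step(v_ref, v_meas, pitch_meas, dt):
    pitch_ref = outer.step(v_ref - v_meas, dt)           # outer (slow) loop
    cyclic_cmd = inner.step(pitch_ref - pitch_meas, dt)  # inner (fast) loop
    return cyclic_cmd

print(control_step(v_ref=1.0, v_meas=0.0, pitch_meas=0.0, dt=0.01))
```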

Relevance:

90.00%

Publisher:

Abstract:

Cost, performance, and availability considerations are forcing even the most conservative high-integrity embedded real-time systems industry to migrate from simple hardware processors to ones equipped with caches and other acceleration features. This migration disrupts the practices and solutions that industry had developed and consolidated over the years to perform timing analysis. Industries that are confident in the efficiency and effectiveness of their verification and validation processes for old-generation processors do not have sufficient insight into the effects of migrating to cache-equipped processors. Caches are perceived as an additional source of complexity, with the potential to shatter the guarantees of cost- and schedule-constrained qualification of their systems. The current industrial approach to timing analysis is ill-equipped to cope with the variability incurred by caches. Conversely, the application of advanced WCET analysis techniques to real-world industrial software, developed without analysability in mind, is hardly feasible. We propose a development approach aimed at minimising cache jitter, as well as at enabling the application of advanced WCET analysis techniques to industrial systems. Our approach builds on: (i) identification of those software constructs that may impede or complicate timing analysis in industrial-scale systems; (ii) elaboration of practical means, under the model-driven engineering (MDE) paradigm, to enforce the automated generation of software that is analysable by construction; (iii) implementation of a layout optimisation method to remove cache jitter stemming from the software layout in memory, with the intent of facilitating incremental software development, which is of high strategic interest to industry. The integration of those constituents in a structured approach to timing analysis achieves two interesting properties: the resulting software is analysable from the earliest releases onwards (as opposed to becoming so only when the system is final) and is more easily amenable to advanced timing analysis by construction, regardless of the system scale and complexity.
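
As a hypothetical illustration of the intuition behind point (iii), a greedy placement can co-locate frequently interacting functions so their code competes less for the same cache sets; the function names, sizes, and greedy rule below are made up and are not the thesis' actual algorithm.

```python
# Greedy layout sketch: place hot caller/callee pairs contiguously first,
# then the rest, yielding deterministic base addresses. Illustrative only.
def layout(functions, hot_pairs):
    """functions: {name: size_in_bytes}; hot_pairs: (caller, callee) pairs
    ordered by decreasing call frequency. Returns {name: base_address}."""
    order, seen = [], set()
    for a, b in hot_pairs:            # co-locate the hottest pairs first
        for f in (a, b):
            if f not in seen:
                order.append(f)
                seen.add(f)
    for f in functions:               # append the remaining functions
        if f not in seen:
            order.append(f)
            seen.add(f)
    addresses, cursor = {}, 0
    for f in order:                   # contiguous placement in that order
        addresses[f] = cursor
        cursor += functions[f]
    return addresses

funcs = {"isr": 96, "filter": 160, "log_writer": 64}
print(layout(funcs, [("isr", "filter")]))
# -> {'isr': 0, 'filter': 96, 'log_writer': 256}
```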

Relevance:

90.00%

Publisher:

Abstract:

Increasingly stringent limits on pollutant emissions, together with greater attention to fuel consumption, performance gains, and drivability, lead to the development of ever more complicated engine control algorithms. At the same time, the propulsion unit is becoming an increasingly varied collection of subsystems that must work in unison. The calibration engineer faces a multitude of variables and algorithms that must be calibrated and tested, and needs tools that help analyze engine behavior by providing concise, easily accessible results. This work reports the development of a combustion analysis system: the objective was to develop software that provides the best solutions for the analysis of an internal combustion engine, in terms of accuracy of the results, variety of available computations, ease of use, and integration with other systems through real-time sharing of the computed results.
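
As a hedged example of one computation such a combustion analysis tool typically exposes (not necessarily this software's actual implementation), the indicated mean effective pressure (IMEP) can be obtained from sampled in-cylinder pressure as IMEP = (∮ p dV) / Vd; the pressure and volume traces below are toy data.

```python
# IMEP sketch: trapezoidal evaluation of the indicated work over one cycle,
# normalized by the displaced volume. Toy traces, illustrative only.
import numpy as np

def imep(pressure, volume, displaced_volume):
    """pressure [Pa] and volume [m^3] sampled over one engine cycle."""
    work = np.sum(0.5 * (pressure[1:] + pressure[:-1]) * np.diff(volume))
    return work / displaced_volume  # [Pa]

theta = np.linspace(0.0, 4.0 * np.pi, 720)            # crank angle [rad]
volume = 3e-4 + 1e-4 * (1.0 - np.cos(theta))          # toy volume [m^3]
pressure = 1e5 + 4e6 * np.exp(-(theta - np.pi / 2) ** 2)  # toy pressure [Pa]
print(f"IMEP = {imep(pressure, volume, 2e-4) / 1e5:.2f} bar")
```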

Relevance:

90.00%

Publisher:

Abstract:

Internet of Things systems are pervasive systems that have evolved from cyber-physical systems into large-scale systems. Due to the number of technologies involved, their software development poses several integration challenges. Among them, the ones preventing proper integration are those related to system heterogeneity, and thus to interoperability. From a software engineering perspective, developers mostly experience the lack of interoperability in two phases of software development: programming and deployment. On the one hand, modern software tends to be distributed across several components, each adopting its most appropriate technology stack, pushing programmers to code in a protocol- and data-agnostic way. On the other hand, each software component should run in the most appropriate execution environment and, as a result, system architects strive to automate deployment on distributed infrastructures. This dissertation aims to improve the development process by introducing proper tools to handle certain aspects of system heterogeneity. Our effort focuses on three of these aspects and, for each of them, we propose a tool addressing the underlying challenge. The first tool handles heterogeneity at the transport- and application-protocol level, the second manages different data formats, and the third obtains optimal deployments. To realize the tools, we adopted a linguistic approach, i.e., we provided specific linguistic abstractions that help developers increase the expressive power of the programming language they use, writing better solutions in more straightforward ways. To validate the approach, we implemented use cases showing that the tools can be used in practice and that they help achieve the expected level of interoperability. In conclusion, to move a step towards the realization of an integrated Internet of Things ecosystem, we target programmers and architects and propose that they use the presented tools to ease the software development process.
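
As a hedged sketch of the protocol-agnostic style the first tool aims at (all class and method names below are hypothetical, not the dissertation's actual API), application code can send messages through a common transport interface while the concrete protocol binding is chosen by configuration.

```python
# Protocol-agnostic messaging sketch: the Device never names HTTP, MQTT, or
# CoAP; it talks to an abstract Transport. Illustrative only.
from abc import ABC, abstractmethod

class Transport(ABC):
    @abstractmethod
    def send(self, topic: str, payload: bytes) -> None: ...

class InMemoryTransport(Transport):
    """Stand-in for an MQTT/HTTP/CoAP binding, for illustration only."""
    def __init__(self):
        self.log = []

    def send(self, topic, payload):
        self.log.append((topic, payload))

class Device:
    def __init__(self, transport: Transport):
        self._transport = transport  # protocol chosen by configuration

    def report_temperature(self, celsius: float):
        self._transport.send("sensors/temp", f"{celsius:.1f}".encode())

device = Device(InMemoryTransport())
device.report_temperature(21.5)
```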

Relevance:

90.00%

Publisher:

Abstract:

The availability of a huge amount of source code in code archives and open-source projects opens up the possibility of merging the machine learning, programming languages, and software engineering research fields. This area is often referred to as Big Code: programming languages are treated in place of natural languages, and different features and patterns of code can be exploited to perform many useful tasks and build supportive tools. Among all the possible applications that can be developed within the area of Big Code, the work presented in this research thesis mainly focuses on two particular tasks: Programming Language Identification (PLI) and Software Defect Prediction (SDP) for source code. Programming language identification is commonly needed in program comprehension and is usually performed directly by developers. However, at large scale, such as in widely used archives (GitHub, Software Heritage), automation of this task is desirable. To accomplish this aim, the problem is analyzed from different points of view (text- and image-based learning approaches) and different models are created, paying particular attention to their scalability. Software defect prediction is a fundamental step in software development for improving quality and assuring the reliability of software products. In the past, defects were sought by manual inspection or using automatic static and dynamic analyzers. Now, the automation of this task can be tackled using learning approaches that can speed up and improve the related procedures. Here, two models have been built and analyzed to detect some of the most common bugs and errors at different code granularity levels (file and method level). The data used and the models' architectures are analyzed and described in detail. Quantitative and qualitative results are reported for both the PLI and SDP tasks, and differences and similarities with other related works are discussed.
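
As a hedged sketch of a text-based PLI approach of the kind described (the tiny corpus and the model choice are illustrative, not the thesis' dataset or architecture), character n-grams of source files can feed a linear classifier.

```python
# Text-based language identification sketch: TF-IDF over character n-grams
# plus logistic regression. Toy corpus, illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

snippets = [
    ("def main():\n    print('hi')", "Python"),
    ("int main() { return 0; }", "C"),
    ("fn main() { println!(\"hi\"); }", "Rust"),
    ("public static void main(String[] a) {}", "Java"),
]
texts, labels = zip(*snippets)

model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(1, 3)),  # char n-grams
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)
print(model.predict(["println!(\"{}\", 42);"]))
```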