14 results for 671200 Computer Hardware and Electronic Equipment
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Mainstream hardware is becoming parallel, heterogeneous, and distributed on every desk, in every home, and in every pocket. As a consequence, in recent years software has been undergoing an epochal turn toward concurrency, distribution, and interaction, pushed by the evolution of hardware architectures and the growing availability of networks. This calls for introducing further abstraction layers on top of those provided by classical mainstream programming paradigms, to tackle more effectively the new complexities that developers face in everyday programming. A convergence is recognizable in the mainstream toward the adoption of the actor paradigm as a means to unite object-oriented programming and concurrency. Nevertheless, we argue that the actor paradigm can only be considered a good starting point to provide a more comprehensive response to such a fundamental and radical change in software development. Accordingly, the main objective of this thesis is to propose Agent-Oriented Programming (AOP) as a high-level general-purpose programming paradigm, a natural evolution of actors and objects, introducing a further level of human-inspired concepts for programming software systems, meant to simplify the design and programming of concurrent, distributed, reactive/interactive programs. To this end, in the dissertation we first construct the required background by studying the state of the art of both actor-oriented and agent-oriented programming, and then we focus on the engineering of integrated programming technologies for developing agent-based systems in their classical application domains: artificial intelligence and distributed artificial intelligence. Then, we shift the perspective from the development of intelligent software systems toward general-purpose software development. Using the expertise gained during the background-construction phase, we introduce a general-purpose programming language named simpAL, which is rooted in general principles and practices of software development and, at the same time, provides an agent-oriented level of abstraction for the engineering of general-purpose software systems.
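As an illustration of the actor-style message-passing model that this thesis takes as its starting point, here is a minimal, hypothetical Python sketch (not simpAL syntax, whose constructs are described in the thesis itself): each actor owns a mailbox, processes messages sequentially, and runs concurrently with other actors, so state is never shared directly.

```python
import queue
import threading

class Actor:
    """Minimal actor: private state, a mailbox, and sequential message handling."""

    def __init__(self):
        self.mailbox = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, message):
        """Asynchronous send: enqueue the message and return immediately."""
        self.mailbox.put(message)

    def _run(self):
        while True:
            message = self.mailbox.get()
            if message is None:      # poison pill: stop the actor
                break
            self.receive(message)

    def receive(self, message):
        raise NotImplementedError

class Counter(Actor):
    """Example actor whose mutable state is touched only by its own thread."""

    def __init__(self):
        self.count = 0
        super().__init__()

    def receive(self, message):
        if message == "inc":
            self.count += 1

counter = Counter()
for _ in range(10):
    counter.send("inc")      # senders never touch the counter's state directly
counter.send(None)           # ask the actor to stop
counter._thread.join()
print(counter.count)         # prints 10
```

The sketch only shows the mailbox-and-sequential-handling discipline shared by actor systems; agent-oriented languages such as simpAL add further, human-inspired abstractions on top of it.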
Design and Development of a Research Framework for Prototyping Control Tower Augmented Reality Tools
Abstract:
The purpose of the air traffic management system is to ensure the safe and efficient flow of air traffic. Therefore, while augmenting efficiency, throughput and capacity in airport operations, attention has rightly been placed on doing so in a safe manner. In the control tower, many advances in operational safety have come in the form of visualization tools for tower controllers. However, there is a paradox in developing such systems to increase controllers' situational awareness: by creating additional computer displays, the controller's gaze is pulled away from the outside view and the time spent looking down at the monitors is increased. This reduces their situational awareness by forcing them to mentally and physically switch between the head-down equipment and the outside view. This research is based on the idea that augmented reality may be able to address this issue. The augmented reality concept has become increasingly popular over the past decade and is being used proficiently in many fields, such as entertainment, cultural heritage, aviation, and military and defense. This know-how could be transferred to air traffic control with relatively low effort and substantial benefits for controllers' situational awareness. Research on this topic is consistent with the SESAR objectives of increasing air traffic controllers' situational awareness and enabling up to 10% additional flights at congested airports while still increasing safety and efficiency. During the Ph.D., a research framework for prototyping augmented reality tools was set up. This framework consists of methodological tools for designing the augmented reality overlays, as well as hardware and software equipment to test them. Several overlays have been designed and implemented in a simulated tower environment, a virtual reconstruction of the Bologna airport control tower. The positive impact of such tools was preliminarily assessed by means of the proposed methodology.
Abstract:
ALICE, an experiment at CERN using the LHC, is specialized in analyzing lead-ion collisions. ALICE will study the properties of quark-gluon plasma, a state of matter where quarks and gluons, under conditions of very high temperatures and densities, are no longer confined inside hadrons. Such a state of matter probably existed just after the Big Bang, before particles such as protons and neutrons were formed. The SDD detector, one of the ALICE subdetectors, is part of the ITS, which is composed of six cylindrical layers with the innermost one attached to the beam pipe. The ITS tracks and identifies particles near the interaction point, and it also helps align the tracks of the particles detected by the more external detectors. The two middle ITS layers contain all 260 SDD detectors. A multichannel readout board, called CARLOSrx, simultaneously receives the data coming from 12 SDD detectors. In total, 24 CARLOSrx boards are needed to read the data coming from all the SDD modules (detector plus front-end electronics). CARLOSrx packs the data coming from the front-end electronics over optical link connections, stores them in a large data FIFO, and then sends them to the DAQ system. Each CARLOSrx is composed of two boards: CARLOSrx data, which reads the data coming from the SDD detectors and configures the FEE, and CARLOSrx clock, which sends the clock signal to all the FEE. This thesis contains a description of the hardware design and firmware features of both the CARLOSrx data and CARLOSrx clock boards, which handle the whole SDD readout chain. A description of the software tools needed to test and configure the front-end electronics is presented at the end of the thesis.
Abstract:
Organic semiconductors hold great promise in the field of electronics due to their low fabrication cost over large areas and their versatility for new devices; for these reasons they represent a great opportunity in the current technological landscape. Some of the most important open issues related to these materials are the effects of surfaces and interfaces between semiconductor and metals, the changes caused by different deposition methods and temperatures, the difficulty of modeling charge transport, and finally fast aging with time, bias, air and light, which can easily change their properties. In order to investigate some important features of organic semiconductors, I fabricated Organic Field-Effect Transistors (OFETs), using them as characterization tools. The focus of my research is to investigate the effects of ion implantation on organic semiconductors and on OFETs. Ion implantation is a technique widely used on inorganic semiconductors to modify their electrical properties through the controlled introduction of foreign atomic species into the semiconductor matrix. I focused my attention on three major novel and interesting effects, which I observed for the first time following ion implantation of OFETs: 1) modification of the electrical conductivity; 2) introduction of stable charged species, electrically active within the organic thin films; 3) stabilization of transport parameters (mobility and threshold voltage). I examined three different semiconductors: Pentacene, a small molecule made of five aromatic rings; Pentacene-TIPS, a more complex derivative of the former; and finally PEDOT:PSS, which belongs to the family of conductive polymers. My research started with the analysis of ion implantation of Pentacene films and Pentacene OFETs. Then, I studied fully inkjet-printed OFETs made of Pentacene-TIPS or PEDOT:PSS, and the research will continue with ion implantation on these promising organic devices.
Abstract:
This thesis concerns the study of complex conformational surfaces and tautomeric equilibria of molecules and molecular complexes by quantum chemical methods and rotational spectroscopy techniques. In particular, the focus of this research is on the effects of substitution and noncovalent interactions in determining the energies and geometries of different conformers, tautomers or molecular complexes. Free-Jet Absorption Millimeter-Wave spectroscopy and Pulsed-Jet Fourier Transform Microwave spectroscopy have been applied to perform these studies, and the results obtained showcase the suitability of these techniques for the study of conformational surfaces and intermolecular interactions. The series of investigations of selected medium-size molecules and complexes has shown how different instrumental setups can be used to obtain a variety of results on molecular properties. The systems studied include molecules of biological interest, such as anethole, and molecules of astrophysical interest, such as N-methylaminoethanol. Moreover, halogenation effects have been investigated in halogen-substituted tautomeric systems (5-chlorohydroxypyridine and 6-chlorohydroxypyridine), where it has been shown that the position of the inserted halogen atom affects the prototropic equilibrium. As for fluorination effects, interesting results have been achieved by investigating some small complexes in which a water molecule is used as a probe to reveal the changes in the electrostatic potential of different fluorinated compounds: 2-fluoropyridine, 3-fluoropyridine and pentafluoropyridine. While in the complexes of water with 2-fluoropyridine and 3-fluoropyridine the geometry is analogous to that of pyridine, with the water molecule linked to the pyridine nitrogen, the case of pentafluoropyridine reveals the effect of perfluorination: the water oxygen points towards the positive center of the pyridine ring. Additional molecular adducts with a water molecule have been analyzed (benzylamine-water and acrylic acid-water) in order to reveal the stabilizing driving forces that characterize these complexes.
Abstract:
In the last decades, nanomaterials, and in particular semiconducting nanoparticles (or quantum dots), have gained increasing attention due to their controllable optical properties and potential applications. Silicon nanoparticles (also called silicon nanocrystals, SiNCs) have been extensively studied in recent years, due to physical and chemical properties which make them a valid alternative to conventional quantum dots. During my PhD studies I planned new synthetic routes to obtain SiNCs functionalised with molecules that could improve the properties of the nanoparticles. This was certainly challenging, because SiNCs are very susceptible to many reagents and conditions that are often used in organic synthesis. They can be irreversibly quenched in the presence of alkalis, they can be damaged in the presence of oxidants, and their optical properties can change in the presence of many nitrogen-containing compounds, metal complexes or simple organic molecules. If their surface is not well passivated, oxygen can introduce defect states, or they can aggregate and precipitate in several solvents. Nevertheless, I was able to functionalise SiNCs with different ligands: chromophores, amines, carboxylic acids, poly(ethylene glycol), even improving functionalisation strategies that already existed. This thesis collects the experimental procedures used to synthesize silicon nanocrystals, the strategies adopted to effectively functionalise the nanoparticles with different types of organic molecules, and the characterisation of their surface, physical properties and luminescence (mostly photogenerated, but also electrochemically generated). I also spent a period of 7 months in Leeds (UK), where I learned how to synthesize other cadmium-free quantum dots made of copper, indium and sulphur (CIS QDs). During my last year of PhD, I focused on their functionalisation by ligand-exchange techniques, yielding the first example of a light-harvesting antenna based on these quantum dots. Part of this thesis is dedicated to them.
Abstract:
Waste prevention (WP) is a strategy which helps societies and individuals strive for sufficiency in resource consumption within planetary boundaries, alongside sustainable and equitable well-being, and to decouple the concepts of well-being and life satisfaction from materialism. Within this dissertation, some instruments to promote WP are analysed from two perspectives: firstly, that of policymakers, at different governance levels, and secondly, that of businesses in the electrical and electronic equipment (EEE) sector. At the national level, the role of WP programmes and market-based instruments (extended producer responsibility, pay-as-you-throw schemes, deposit-refund systems, environmental taxes) in boosting the prevention of municipal solid waste is investigated. Then, focusing on the Emilia-Romagna Region (Italy), the performance of the waste management system is assessed over a long period, including some years before and after an institutional reform of the waste management governance regime. The impact of a centralisation (at the regional level) of both planning and economic regulation of waste services on waste generation and WP is analysed. Finally, to support regional decision-makers in the prioritisation of publicly funded projects for WP, a framework for the sustainability assessment, the evaluation of success, and the prioritisation of WP measures was applied to some projects implemented by Municipalities in the Region. Seeking to close the research gap between engineering and business, WP strategies are discussed as drivers for business model (BM) innovation in the EEE sector. Firstly, an innovative approach to a digital tracking solution for professional EEE management is analysed. New BMs which facilitate repair, reuse, remanufacturing, and recycling are created and discussed. Secondly, the impact of BMs based on servitisation and on producer ownership on the extension of equipment lifetime is analysed, by reviewing real cases of organisations in the EEE sector applying result- and use-oriented BMs.
Abstract:
The world of Computational Biology and Bioinformatics presently integrates many different areas of expertise, including computer science and electronic engineering. A major aim in Data Science is the development and tuning of specific computational approaches to interpret the complexity of Biology. Molecular biologists and medical doctors rely heavily on interdisciplinary experts capable of understanding the biological background while applying algorithms to find optimal solutions to their problems. With this problem-solving orientation, I was involved in two basic research fields: Cancer Genomics and Enzyme Proteomics. What I developed and implemented can therefore be considered a general effort to help data analysis both in Cancer Genomics and in Enzyme Proteomics, focusing on enzymes, which catalyse all the biochemical reactions in cells. Specifically, in Cancer Genomics I contributed to the characterization of the intratumoral immune microenvironment in gastrointestinal stromal tumours (GISTs), correlating immune cell population levels with tumour subtypes. I was involved in the setup of strategies for the evaluation and standardization of different approaches for fusion transcript detection in sarcomas that can be applied in routine diagnostics. This was part of a coordinated effort of the Sarcoma working group of "Alleanza Contro il Cancro". As to Enzyme Proteomics, I generated a derived database collecting all the human proteins and enzymes known to be associated with genetic diseases. I curated the data search in freely available databases such as PDB, UniProt, Humsavar and ClinVar, and I was responsible for searching, updating and handling the information content, and for computing statistics. I also developed a web server, BENZ, which allows researchers to annotate an enzyme sequence with the corresponding Enzyme Commission number, the key feature fully describing the catalysed reaction. In addition, I contributed substantially to the characterization of the enzyme-genetic disease association, for a better classification of metabolic genetic diseases.
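For readers unfamiliar with the annotation target mentioned above: an Enzyme Commission (EC) number is a four-level hierarchical code whose first digit identifies the main reaction class. The short Python sketch below (purely illustrative; it is not the BENZ server, whose annotation method is described in the thesis) shows how such a code decomposes into its hierarchy.

```python
# Top-level Enzyme Commission classes (EC 1-7).
EC_CLASSES = {
    1: "Oxidoreductases",
    2: "Transferases",
    3: "Hydrolases",
    4: "Lyases",
    5: "Isomerases",
    6: "Ligases",
    7: "Translocases",
}

def parse_ec_number(ec: str) -> dict:
    """Split an EC number such as '2.7.11.1' into its four hierarchy levels."""
    levels = ec.split(".")
    if len(levels) != 4:
        raise ValueError(f"Malformed EC number: {ec!r}")
    main_class = int(levels[0])
    return {
        "class": EC_CLASSES[main_class],       # e.g. Transferases
        "subclass": ".".join(levels[:2]),      # e.g. 2.7 (phosphorus-containing groups)
        "sub-subclass": ".".join(levels[:3]),  # e.g. 2.7.11
        "serial": ec,                          # the fully specified reaction
    }

print(parse_ec_number("2.7.11.1"))
```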
Abstract:
The research activity carried out during the PhD course in Electrical Engineering belongs to the branch of electric and electronic measurements. The main subject of this thesis is a distributed measurement system to be installed in Medium Voltage power networks, together with the methods developed to analyze the data acquired by the measurement system and to monitor power quality. Chapter 2 illustrates the increasing interest in power quality in electrical systems, reporting the international research activity on the problem and the relevant standards and guidelines that have been issued. The question of the quality of the voltage provided by utilities, and influenced by customers at the various points of a network, emerged only in recent years, in particular as a consequence of the liberalization of the energy market. Traditionally, the concept of quality of the delivered energy has been associated mostly with its continuity, so reliability was the main characteristic to be ensured for power systems. Nowadays, the number and duration of interruptions are the "quality indicators" commonly perceived by most customers; for this reason, a short section is also dedicated to network reliability and its regulation. In this context it should be noted that, although the measurement system developed during the research activity belongs to the field of power quality evaluation systems, the information registered in real time by its remote stations can also be used to improve system reliability. Given the vast range of power-quality-degrading phenomena that can occur in distribution networks, the study has focused on electromagnetic transients affecting line voltages. The outcome of this study has been the design and realization of a distributed measurement system which continuously monitors the phase signals in different points of a network, detects the occurrence of transients superposed on the fundamental steady-state component, and registers the time of occurrence of such events. The data set is finally used to locate the source of the transient disturbance propagating along the network lines. Most of the oscillatory transients affecting line voltages are due to faults occurring at any point of the distribution system and have to be detected before the intervention of the protection equipment. An important conclusion is that the method can improve the reliability of the monitored network, since knowing the location of a fault allows the energy manager to reduce as much as possible both the area of the network to be disconnected for protection purposes and the time spent by technical staff to recover from the abnormal condition and/or the damage. The part of the thesis presenting the results of this study and activity is structured as follows: chapter 3 deals with the propagation of electromagnetic transients in power systems, defining the characteristics and causes of the phenomena and briefly reporting the theory and approaches used to study transient propagation. Then the state of the art concerning methods to detect and locate faults in distribution networks is presented. Finally, attention is paid to the particular technique adopted for this purpose in the thesis, and to the methods developed on the basis of that approach. Chapter 4 reports the configuration of the distribution networks on which the fault location method has been applied by means of simulations, as well as the results obtained case by case.
In this way the performance of the location procedure is tested first in ideal and then in realistic operating conditions. Chapter 5 presents the measurement system designed to implement the transient detection and fault location method. The hardware belonging to the measurement chain of every acquisition channel in the remote stations is described. Then, the overall measurement system is characterized by considering the non-ideal aspects of each device that can contribute to the final combined uncertainty on the estimated position of the fault in the network under test. Finally, this parameter is computed according to the Guide to the Expression of Uncertainty in Measurement, by means of a numerical procedure. The last chapter describes a device designed and realized during the PhD activity with the aim of replacing the commercial capacitive voltage divider belonging to the conditioning block of the measurement chain. This study has been carried out with the aim of providing an alternative to the transducer in use, offering equivalent performance at lower cost. In this way, the economic impact of the investment associated with the whole measurement system would be significantly reduced, making the method much more feasible to apply.
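To make the time-of-arrival idea behind such fault location concrete, here is a hedged numerical sketch of the classical two-end travelling-wave formula: assuming two time-synchronized stations at the ends of a line of known length record the arrival instants of the transient wavefront, the fault position follows directly. All names and values are illustrative and do not reproduce the thesis's actual procedure or its uncertainty analysis.

```python
def locate_fault(t_a, t_b, line_length_m, wave_speed_mps=2.95e8):
    """Two-end travelling-wave fault location.

    t_a, t_b: arrival times (s) of the transient wavefront at stations A and B,
              referred to a common (e.g. GPS-synchronized) time base.
    Returns the estimated fault distance from station A in metres,
    from x = (L + v * (t_a - t_b)) / 2.
    """
    return (line_length_m + wave_speed_mps * (t_a - t_b)) / 2.0

# Illustrative numbers only: a 10 km line, wavefront observed 5 microseconds
# earlier at station A than at station B -> the fault lies closer to A.
distance = locate_fault(t_a=100.0e-6, t_b=105.0e-6, line_length_m=10_000)
print(f"Estimated fault position: {distance:.0f} m from station A")
```

In a real deployment, each quantity entering this formula (timestamps, line length, propagation speed) carries its own uncertainty, which is why the thesis propagates them into a combined uncertainty on the estimated fault position.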
Abstract:
During the last few decades, unprecedented technological growth has been at the center of embedded systems design, with Moore's Law being the leading factor of this trend. Today, in fact, an ever-increasing number of cores can be integrated on the same die, marking the transition from state-of-the-art multi-core chips to the new many-core design paradigm. Despite the extraordinarily high computing power, the complexity of many-core chips opens the door to several challenges. As a result of the increased silicon density of modern Systems-on-a-Chip (SoC), the design space exploration needed to find the best design has exploded, and hardware designers are facing the problem of a huge design space. Virtual Platforms have always been used to enable hardware-software co-design, but today they must cope with the huge complexity of both hardware and software systems. In this thesis two different research works on Virtual Platforms are presented: the first is intended for the hardware developer, to easily allow complex cycle-accurate simulations of many-core SoCs. The second exploits the parallel computing power of off-the-shelf General Purpose Graphics Processing Units (GPGPUs), with the goal of increased simulation speed. The term virtualization can be used in the context of many-core systems not only to refer to the aforementioned hardware emulation tools (Virtual Platforms), but also for two other main purposes: 1) to help the programmer achieve the maximum possible performance of an application, by hiding the complexity of the underlying hardware; 2) to efficiently exploit the highly parallel hardware of many-core chips in environments with multiple active Virtual Machines. This thesis is focused on virtualization techniques with the goal of mitigating, and overcoming when possible, some of the challenges introduced by the many-core design paradigm.
Abstract:
The present Thesis reports on the various research projects to which I have contributed during my PhD period, working with several research groups, and whose results have been communicated in a number of scientific publications. The main focus of my research activity was to learn, test, exploit and extend the recently developed vdW-DFT (van der Waals corrected Density Functional Theory) methods for computing the structural, vibrational and electronic properties of ordered molecular crystals from first principles. A secondary, and more recent, research activity has been the analysis, with microelectrostatic methods, of Molecular Dynamics (MD) simulations of disordered molecular systems. While until a few years ago only rather unreliable methods based on empirical models were practically usable, accurate calculations of the crystal energy are now possible, thanks to very fast modern computers and to the excellent performance of the best vdW-DFT methods. Accurate energies are particularly important for describing organic molecular solids, since they often exhibit several alternative crystal structures (polymorphs), with very different packing arrangements but very small energy differences. Standard DFT methods do not describe the long-range electron correlations which give rise to the vdW interactions. Although weak, these interactions are extremely sensitive to the packing arrangement, and neglecting them used to be a problem. The calculation of reliable crystal structures and vibrational frequencies has become possible only recently, thanks to the development of good representations of the vdW contribution to the energy (known as "vdW corrections").
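As a reminder of what such "vdW corrections" typically look like, the generic pairwise-additive form used in Grimme-type dispersion-corrected DFT (given here only as background; the specific scheme adopted in the thesis may differ) is:

```latex
E_{\text{total}} = E_{\text{DFT}} + E_{\text{disp}}, \qquad
E_{\text{disp}} = -\sum_{i<j} \frac{C_{6,ij}}{R_{ij}^{6}}\, f_{\text{damp}}(R_{ij})
```

where R_ij is the distance between atoms i and j, C_6,ij is the pairwise dispersion coefficient, and f_damp is a short-range damping function that switches the correction off where standard DFT is already accurate.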
Abstract:
Modern scientific discoveries are driven by an insatiable demand for computational resources. High-Performance Computing (HPC) systems are an aggregation of computing power that delivers considerably higher performance than a typical desktop computer can provide, in order to solve large problems in science, engineering, or business. An HPC room in a datacenter is a complex controlled environment that hosts thousands of computing nodes, which consume electrical power in the range of megawatts that is completely transformed into heat. Although a datacenter contains sophisticated cooling systems, our studies provide quantitative evidence of thermal bottlenecks in real-life production workloads, showing the presence of significant spatial and temporal thermal and power heterogeneity. Therefore, minor thermal issues/anomalies can potentially start a chain of events that leads to an imbalance between the amount of heat generated by the computing nodes and the heat removed by the cooling system, giving rise to thermal hazards. Although thermal anomalies are rare events, timely anomaly detection/prediction is vital to avoid damage to IT and facility equipment and outages of the datacenter, with severe societal and business losses. For this reason, automated approaches to detect thermal anomalies in datacenters have considerable potential. This thesis analyzes and characterizes the power and thermal behaviour of a Tier0 datacenter (CINECA) during production and under abnormal thermal conditions. Then, a Deep Learning (DL)-powered thermal hazard prediction framework is proposed. The proposed models are validated against real thermal hazard events reported for the studied HPC cluster while in production. To the best of my knowledge, this thesis is the first empirical study of thermal anomaly detection and prediction techniques on a real large-scale HPC system. For this thesis, I used a large-scale dataset of monitoring data from tens of thousands of sensors, covering around 24 months with a data collection interval of around 20 seconds.
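As a much-simplified sketch of reconstruction-based anomaly detection on sensor time series (a toy baseline, not the DL framework developed in the thesis, whose architecture and training data are specific to the CINECA cluster), windows of a temperature signal can be flagged when their reconstruction error under a small autoencoder, trained only on normal behaviour, exceeds a threshold derived from normal data. All names, window sizes and signals below are hypothetical.

```python
import torch
import torch.nn as nn

WINDOW = 32  # samples per sliding window of one temperature sensor (hypothetical)

class WindowAutoencoder(nn.Module):
    """Tiny fully connected autoencoder over fixed-length sensor windows."""
    def __init__(self, window=WINDOW, latent=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(window, 16), nn.ReLU(), nn.Linear(16, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 16), nn.ReLU(), nn.Linear(16, window))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train(model, normal_windows, epochs=50, lr=1e-3):
    """Fit the autoencoder on windows recorded under normal thermal conditions."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(normal_windows), normal_windows)
        loss.backward()
        opt.step()

def anomaly_scores(model, windows):
    """Per-window reconstruction error: high error suggests a thermal anomaly."""
    with torch.no_grad():
        return ((model(windows) - windows) ** 2).mean(dim=1)

# Toy data: sinusoidal "normal" behaviour plus one window containing a thermal spike.
t = torch.linspace(0, 6.28, WINDOW)
normal = torch.stack([torch.sin(t + phi) for phi in torch.rand(200) * 6.28])
spike = torch.sin(t).clone()
spike[20:] += 3.0                                  # sudden temperature jump
model = WindowAutoencoder()
train(model, normal)
scores = anomaly_scores(model, torch.stack([normal[0], spike]))
threshold = anomaly_scores(model, normal).max()    # crude threshold from normal data
print(scores, "anomalous:", scores > threshold)
```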
Abstract:
Deep Neural Networks (DNNs) have revolutionized a wide range of applications beyond traditional machine learning and artificial intelligence fields, e.g., computer vision, healthcare, natural language processing and others. At the same time, edge devices have become central in our society, generating an unprecedented amount of data which could be used to train data-hungry models such as DNNs. However, the potentially sensitive or confidential nature of the gathered data poses privacy concerns when it is stored and processed in centralized locations. To this end, decentralized learning decouples model training from the need to directly access raw data, by alternating on-device training and periodic communication. The ability to distill knowledge from decentralized data, however, comes at the cost of more challenging learning settings, such as coping with heterogeneous hardware and network connectivity, statistical diversity of data, and ensuring verifiable privacy guarantees. This Thesis proposes an extensive overview of the decentralized learning literature, including a novel taxonomy and a detailed description of the most relevant system-level contributions in the related literature for privacy, communication efficiency, data and system heterogeneity, and poisoning defense. Next, this Thesis presents the design of an original solution to tackle communication efficiency and system heterogeneity, and empirically evaluates it in federated settings. For communication efficiency, an original method, specifically designed for Convolutional Neural Networks, is also described and evaluated against the state of the art. Furthermore, this Thesis provides an in-depth review of recently proposed methods to tackle the performance degradation introduced by data heterogeneity, followed by empirical evaluations on challenging data distributions, highlighting the strengths and possible weaknesses of the considered solutions. Finally, this Thesis presents a novel perspective on the usage of Knowledge Distillation as a means of optimizing decentralized learning systems in settings characterized by data or system heterogeneity. Our vision of relevant future research directions closes the manuscript.
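As one concrete example of the decentralized mechanics discussed above, the following is a minimal sketch of Federated Averaging (FedAvg)-style aggregation: clients train locally on data that never leaves the device, and a server averages their parameters weighted by local dataset size. This is a generic illustration with hypothetical names, not any of the original methods contributed in the thesis.

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1, epochs=5):
    """On-device step: a few epochs of logistic-regression gradient descent on local data."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-data @ w))           # sigmoid
        grad = data.T @ (preds - labels) / len(labels)    # gradient of the log-loss
        w -= lr * grad
    return w

def fed_avg(global_weights, clients):
    """One communication round: clients train locally, server averages by data size."""
    updates, sizes = [], []
    for data, labels in clients:
        updates.append(local_update(global_weights, data, labels))
        sizes.append(len(labels))
    return np.average(np.stack(updates), axis=0, weights=np.array(sizes, dtype=float))

# Toy heterogeneous clients: each holds a differently sized, locally generated dataset.
rng = np.random.default_rng(0)
clients = []
for n in (50, 200, 500):
    x = rng.normal(size=(n, 3))
    y = (x @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)
    clients.append((x, y))

w = np.zeros(3)
for _ in range(20):                 # raw data never leaves the clients
    w = fed_avg(w, clients)
print("Aggregated model weights:", w)
```

Real federated systems add the concerns the thesis addresses on top of this skeleton: compressing or selecting what is communicated, handling stragglers and heterogeneous devices, coping with non-IID client data, and defending against poisoned updates.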