13 results for hardware design

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance: 70.00%

Abstract:

ALICE, an experiment at CERN that uses the LHC, is specialized in the analysis of lead-ion collisions. ALICE will study the properties of quark-gluon plasma, a state of matter in which quarks and gluons, under conditions of very high temperature and density, are no longer confined inside hadrons. Such a state of matter probably existed just after the Big Bang, before particles such as protons and neutrons were formed. The SDD detector, one of the ALICE subdetectors, is part of the ITS, which is composed of 6 cylindrical layers with the innermost one attached to the beam pipe. The ITS tracks and identifies particles near the interaction point; it also aligns the tracks of the particles detected by the more external detectors. The two middle ITS layers contain all 260 SDD detectors. A multichannel readout board, called CARLOSrx, simultaneously receives the data coming from 12 SDD detectors; in total, 24 CARLOSrx boards are needed to read the data coming from all the SDD modules (detector plus front-end electronics). CARLOSrx packs the data arriving from the front-end electronics over optical links, stores them in a large data FIFO and then sends them to the DAQ system. Each CARLOSrx is composed of two boards: CARLOSrx data, which reads the data coming from the SDD detectors and configures the FEE, and CARLOSrx clock, which sends the clock signal to all the FEE. This thesis describes the hardware design and firmware features of both the CARLOSrx data and CARLOSrx clock boards, which handle the whole SDD readout chain. A description of the software tools necessary to test and configure the front-end electronics is presented at the end of the thesis.

Relevance: 40.00%

Abstract:

The new generation of multicore processors opens new perspectives for the design of embedded systems. Multiprocessing, however, poses new challenges to the scheduling of real-time applications, in which ever-increasing computational demands are constantly flanked by the need to meet critical time constraints. Many research works have contributed to this field by introducing new, advanced scheduling algorithms. However, although many of these works have solidly demonstrated their effectiveness, the actual support for multiprocessor real-time scheduling offered by current operating systems is still very limited. This dissertation deals with the implementation aspects of real-time schedulers in modern embedded multiprocessor systems. The first contribution is an open-source scheduling framework capable of realizing complex multiprocessor scheduling policies, such as G-EDF, on conventional operating systems by exploiting only their native scheduler from user space. A set of experimental evaluations compares the proposed solution to other research projects that pursue the same goals by means of kernel modifications, highlighting comparable scheduling performance. The principles that underpin the operation of the framework, originally designed for symmetric multiprocessors, have been further extended first to asymmetric ones, which are subject to major restrictions such as the lack of support for task migration, and later to re-programmable hardware architectures (FPGAs). In the latter case, this work introduces a scheduling accelerator, which offloads most of the scheduling operations to hardware and exhibits extremely low scheduling jitter. The realization of a portable scheduling framework presented many interesting software challenges; one of these was timekeeping. In this regard, a further contribution is a novel data structure called the addressable binary heap (ABH). The ABH, which is conceptually a pointer-based implementation of a binary heap, shows very good average- and worst-case performance when addressing the problem of tick-less timekeeping with high-resolution timers.
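
As a rough illustration of the addressable-heap idea only (not the thesis's pointer-based ABH implementation), the Python sketch below shows a minimal array-based timer heap in which every insertion returns a handle, so that an individual high-resolution timer can later be cancelled in O(log n) without scanning the structure:

```python
class AddressableHeap:
    """Minimal array-based sketch of an addressable min-heap of timers.

    insert() returns a handle that can later cancel the timer in O(log n),
    which is the property a tick-less timekeeping layer needs; the thesis's
    ABH achieves the same with a pointer-based layout.
    """

    def __init__(self):
        self._heap = []        # entries are [expiry, handle]
        self._pos = {}         # handle -> current index inside self._heap
        self._next_handle = 0

    def insert(self, expiry):
        handle = self._next_handle
        self._next_handle += 1
        self._heap.append([expiry, handle])
        self._pos[handle] = len(self._heap) - 1
        self._sift_up(len(self._heap) - 1)
        return handle

    def peek_min(self):
        return self._heap[0][0] if self._heap else None  # next expiry time

    def pop_min(self):
        expiry, handle = self._heap[0]
        self._remove_at(0)
        return expiry, handle

    def cancel(self, handle):
        self._remove_at(self._pos[handle])   # O(log n) removal by handle

    def _remove_at(self, i):
        handle = self._heap[i][1]
        last = self._heap.pop()
        del self._pos[handle]
        if i < len(self._heap):              # removed element was not the last
            self._heap[i] = last
            self._pos[last[1]] = i
            self._sift_down(i)
            self._sift_up(i)

    def _swap(self, i, j):
        self._heap[i], self._heap[j] = self._heap[j], self._heap[i]
        self._pos[self._heap[i][1]] = i
        self._pos[self._heap[j][1]] = j

    def _sift_up(self, i):
        while i > 0 and self._heap[i][0] < self._heap[(i - 1) // 2][0]:
            self._swap(i, (i - 1) // 2)
            i = (i - 1) // 2

    def _sift_down(self, i):
        n = len(self._heap)
        while True:
            left, right, smallest = 2 * i + 1, 2 * i + 2, i
            if left < n and self._heap[left][0] < self._heap[smallest][0]:
                smallest = left
            if right < n and self._heap[right][0] < self._heap[smallest][0]:
                smallest = right
            if smallest == i:
                return
            self._swap(i, smallest)
            i = smallest


# Hypothetical usage: expiry times in nanoseconds
timers = AddressableHeap()
t1 = timers.insert(5_000_000)
t2 = timers.insert(2_000_000)
timers.cancel(t1)
print(timers.pop_min())   # -> (2000000, handle of t2)
```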

Relevance: 40.00%

Abstract:

This thesis highlights the importance of ad-hoc designed and developed embedded systems in the implementation of intelligent sensor networks. As evidence, four application areas are presented: Precision Agriculture, Bioengineering, Automotive and Structural Health Monitoring. For each field, the design and development of one or more smart devices is reported, together with the on-board processing, experimental validation and field tests. In particular, the design and development of a fruit meter is presented. In the bioengineering field, three different projects are reported, detailing the architectures implemented and the validation tests conducted. Two prototype realizations of an inner temperature measurement system in electric motors for an automotive application are then discussed. Lastly, the HW/SW design of a Smart Sensor Network is analyzed: the network features on-board data management and processing, integration into an IoT toolchain, Wireless Sensor Network developments and an AI framework for vibration-based structural assessment.

Relevance: 30.00%

Abstract:

This thesis deals with the design of advanced OFDM systems; both waveform and receiver design are treated. The main scope of the thesis is to study and propose novel design solutions able to cope with the weaknesses and crucial aspects of modern OFDM systems. Starting from the transmitter side, the problem represented by low resilience to non-linear distortion has been assessed. A novel technique that considerably reduces the Peak-to-Average Power Ratio (PAPR), yielding a quasi-constant signal envelope in the time domain (PAPR close to 1 dB), has been proposed. The proposed technique, named Rotation Invariant Subcarrier Mapping (RISM), is a novel scheme for subcarrier data mapping in which the symbols belonging to the modulation alphabet are not anchored, but maintain some degrees of freedom. In other words, a bit tuple is not mapped onto a single point, but rather onto a geometrical locus which is totally or partially rotation invariant. The final positions of the transmitted complex symbols are chosen by an iterative optimization process that minimizes the PAPR of the resulting OFDM symbol. Numerical results confirm that RISM makes OFDM usable even in severely non-linear channels. Another well-known problem which has been tackled is the vulnerability to synchronization errors: in an OFDM system, accurate recovery of the carrier frequency and symbol timing is crucial for proper demodulation of the received packets. In general, timing and frequency synchronization is performed in two separate phases, called PRE-FFT and POST-FFT synchronization. Regarding the PRE-FFT phase, a novel joint symbol timing and carrier frequency synchronization algorithm has been presented. The proposed algorithm is characterized by very low hardware complexity and, at the same time, guarantees very good performance in both AWGN and multipath channels. Regarding the POST-FFT phase, a novel approach for both pilot structure and receiver design has been presented. In particular, a novel pilot pattern has been introduced in order to minimize the occurrence of overlaps between two pattern-shifted replicas. This allows conventional pilots to be replaced with nulls in the frequency domain, introducing the so-called Silent Pilots. As a result, the optimal receiver turns out to be very robust against severe Rayleigh multipath fading and is characterized by low complexity. The performance of this approach has been evaluated analytically and numerically; compared with state-of-the-art alternatives, in both AWGN and multipath fading channels, considerable performance improvements have been obtained. The crucial problem of channel estimation has been thoroughly investigated, with particular emphasis on the decimation of the Channel Impulse Response (CIR) through the selection of the Most Significant Samples (MSSs). In this context our contribution is twofold: on the theoretical side, we derived lower bounds on the estimation mean-square error (MSE) for any MSS selection strategy; on the receiver design side, we proposed novel MSS selection strategies which have been shown to approach these MSE lower bounds and to outperform the state-of-the-art alternatives. Finally, the possibility of using Single Carrier Frequency Division Multiple Access (SC-FDMA) in the broadband satellite return channel has been assessed. Notably, SC-FDMA is able to improve the physical-layer spectral efficiency with respect to the single-carrier systems used so far in the Return Channel Satellite (RCS) standards. However, it requires strict synchronization and is also sensitive to the phase noise of local radio-frequency oscillators. For this reason, an effective pilot tone arrangement within the SC-FDMA frame and a novel Joint Multi-User (JMU) estimation method for SC-FDMA have been proposed. As shown by numerical results, the proposed scheme manages to satisfy the strict synchronization requirements and to guarantee proper demodulation of the received signal.
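
To make the optimization target of RISM concrete, the sketch below only computes the PAPR of a single OFDM symbol (oversampled IFFT of the mapped subcarriers); it does not reproduce the RISM iterative mapping itself, and the subcarrier count and modulation are illustrative:

```python
import numpy as np


def papr_db(subcarrier_symbols, oversampling=4):
    """PAPR (in dB) of one OFDM symbol: peak over mean power of the
    time-domain envelope obtained by an oversampled IFFT of the values
    mapped onto the subcarriers."""
    n = len(subcarrier_symbols)
    half = n // 2
    # zero-pad in the middle of the spectrum to oversample the envelope
    padded = np.concatenate([subcarrier_symbols[:half],
                             np.zeros((oversampling - 1) * n, dtype=complex),
                             subcarrier_symbols[half:]])
    envelope = np.fft.ifft(padded)
    power = np.abs(envelope) ** 2
    return 10 * np.log10(power.max() / power.mean())


# Hypothetical example: 64 QPSK-mapped subcarriers
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=(64, 2))
qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
print(f"PAPR = {papr_db(qpsk):.2f} dB")
```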

Relevance: 30.00%

Abstract:

The evolution of embedded electronics applications forces electronic system designers to meet ever-increasing requirements. This evolution pushes up the computational power of digital signal processing systems, as well as the energy required to accomplish the computations, due to the increasing mobility of such applications. Current approaches used to meet these requirements rely on the adoption of application-specific signal processors. Such devices exploit powerful accelerators, which are able to meet both performance and energy requirements. On the other hand, the high specificity of such accelerators often results in a lack of flexibility which affects non-recurring engineering costs, time to market, and market volumes too. The state of the art mainly proposes two solutions to overcome these issues with the ambition of delivering reasonable performance and energy efficiency: reconfigurable computing and multi-processor computing. Both of these solutions benefit from post-fabrication programmability, which results in increased flexibility. Nevertheless, the gap between these approaches and dedicated hardware is still too high for many application domains, especially when targeting the mobile world. In this scenario, flexible and energy-efficient acceleration can be achieved by merging these two computational paradigms, in order to address all the constraints introduced above. This thesis focuses on the exploration of the design and application spectrum of reconfigurable computing, exploited as application-specific acceleration for multi-processor systems-on-chip. More specifically, it introduces a reconfigurable digital signal processor featuring a heterogeneous set of reconfigurable engines, and a homogeneous multi-core system exploiting three different flavours of reconfigurable and mask-programmable technologies as the implementation platform for application-specific accelerators. In this work, the various trade-offs concerning the utilization of multi-core platforms and the different configuration technologies are explored, characterizing the design space of the proposed approach in terms of programmability, performance, energy efficiency and manufacturing costs.

Relevance: 30.00%

Abstract:

The PhD activity described in this document was carried out within the Microsatellite and Microsystem Laboratory of the II Faculty of Engineering, University of Bologna. Its main objective is the design and development of a GNSS receiver for the orbit determination of microsatellites in low Earth orbit. The development starts from the electronic design and goes up to the implementation of the navigation algorithms, covering all the aspects involved in this type of application. The use of GPS receivers for orbit determination is a consolidated application used in many space missions, but the deployment of new GNSS systems within a few years, such as the European Galileo, the Chinese COMPASS and the Russian modernized GLONASS, poses new challenges and offers new opportunities to increase orbit determination performance. Evaluating the improvements coming from the new systems, together with implementing a receiver compatible with at least one of them, are the main activities of the PhD. The activities can be divided into three sections: receiver requirements definition and prototype implementation, design and analysis of the GNSS signal tracking algorithms, and design and analysis of the navigation algorithms. The receiver prototype is based on a Virtex FPGA by Xilinx and includes a PowerPC processor. The architecture follows the software-defined radio paradigm, so most of the signal processing is performed in software, while only what is strictly necessary is done in hardware. The tracking algorithms are implemented as a combination of a Phase Locked Loop and a Frequency Locked Loop for the carrier, and a Delay Locked Loop with variable bandwidth for the code. The navigation algorithm is based on the extended Kalman filter and includes an accurate LEO orbit model.
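
As a sketch of the navigation filter's structure only, the following generic extended Kalman filter predict/update skeleton keeps the actual models abstract; the LEO orbit propagation, the GNSS measurement model and the noise covariances used in the thesis are not reproduced, so `f`, `h` and their Jacobians are left as caller-supplied functions:

```python
import numpy as np


class ExtendedKalmanFilter:
    """Bare predict/update cycle of an EKF. `f` (state propagation, here it
    would be a LEO orbit model) and `h` (measurement model, e.g. GNSS
    pseudoranges) are passed in as callables with their Jacobians, so the
    skeleton stays independent of the actual models."""

    def __init__(self, x0, P0, Q, R):
        self.x, self.P, self.Q, self.R = x0, P0, Q, R

    def predict(self, f, F_jac, dt):
        F = F_jac(self.x, dt)                  # Jacobian of f at current state
        self.x = f(self.x, dt)                 # propagate the state
        self.P = F @ self.P @ F.T + self.Q     # propagate the covariance

    def update(self, z, h, H_jac):
        H = H_jac(self.x)
        y = z - h(self.x)                      # innovation
        S = H @ self.P @ H.T + self.R          # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)    # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P
```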

Relevance: 30.00%

Abstract:

Most electronic systems can be described, in a very simplified way, as an assemblage of analog and digital components put together in order to perform a certain function. Nowadays, there is an increasing tendency to reduce the analog components and to replace them with operations performed in the digital domain. This tendency has led to the emergence of new electronic systems that are more flexible, cheaper and more robust. However, no matter the amount of digital processing implemented, there will always be an analog part to be dealt with, and thus the step of converting digital signals into analog signals, and vice versa, cannot be avoided. This conversion can be more or less complex depending on the characteristics of the signals. Thus, even if it is desirable to replace functions carried out by analog components with digital processes, it is equally important to do so in a way that simplifies the conversion from digital to analog signals and vice versa. In the present thesis, we have studied strategies based on increasing the amount of processing in the digital domain in such a way that the implementation of the analog hardware stages can be simplified. To this aim, we have proposed the use of very coarsely quantized signals, i.e. 1-bit signals, for the acquisition and the generation of particular classes of signals.
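
As one common, and here purely illustrative, way to obtain a 1-bit representation of a band-limited signal, the sketch below applies first-order sigma-delta modulation; it is not claimed to be the specific acquisition or generation scheme developed in the thesis:

```python
import numpy as np


def sigma_delta_1bit(x):
    """First-order sigma-delta modulation of an oversampled signal x.
    The output stream takes values in {-1, +1}; low-pass filtering it
    recovers an approximation of x, so the analog reconstruction stage
    can be reduced to a two-level driver followed by a filter."""
    y = np.empty_like(x)
    integrator = 0.0
    for n, sample in enumerate(x):
        integrator += sample - (y[n - 1] if n > 0 else 0.0)
        y[n] = 1.0 if integrator >= 0.0 else -1.0
    return y


# Hypothetical example: 50 Hz tone, heavily oversampled at 64 kHz
t = np.arange(0, 0.1, 1 / 64000)
x = 0.5 * np.sin(2 * np.pi * 50 * t)
bitstream = sigma_delta_1bit(x)
```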

Relevance: 30.00%

Abstract:

During the last few decades, an unprecedented technological growth has been at the center of embedded systems design, with Moore's Law being the leading factor of this trend. Today, in fact, an ever-increasing number of cores can be integrated on the same die, marking the transition from state-of-the-art multi-core chips to the new many-core design paradigm. Despite the extraordinarily high computing power, the complexity of many-core chips opens the door to several challenges. As a result of the increased silicon density of modern Systems-on-a-Chip (SoC), the design space exploration needed to find the best design has exploded, and hardware designers now face the problem of a huge design space. Virtual Platforms have always been used to enable hardware-software co-design, but today they face the huge complexity of both hardware and software systems. In this thesis, two different research works on Virtual Platforms are presented: the first is intended for the hardware developer, to easily allow complex cycle-accurate simulations of many-core SoCs. The second work exploits the parallel computing power of off-the-shelf General Purpose Graphics Processing Units (GPGPUs), with the goal of increasing simulation speed. The term virtualization can be used in the context of many-core systems not only to refer to the aforementioned hardware emulation tools (Virtual Platforms), but also for two other main purposes: 1) to help the programmer achieve the maximum possible performance of an application, by hiding the complexity of the underlying hardware; 2) to efficiently exploit the highly parallel hardware of many-core chips in environments with multiple active Virtual Machines. This thesis focuses on virtualization techniques with the goal of mitigating, and overcoming when possible, some of the challenges introduced by the many-core design paradigm.

Relevance: 30.00%

Abstract:

The application of dexterous robotic hands outside research laboratories has been limited by the intrinsic complexity that these devices present. This is directly reflected in an economically unreasonable cost and a low overall reliability. The research reported in this thesis shows how the problem of complexity in the design of robotic hands can be tackled by taking advantage of modern technologies (i.e. rapid prototyping), leading to innovative concepts for the design of the mechanical structure, the actuation and the sensory systems. The solutions adopted drastically reduce the prototyping and production costs and increase the reliability, by reducing the number of parts required and averaging their individual reliability factors. In order to derive guidelines for the design process, the problem of robotic grasp and manipulation by a dual arm/hand system has been reviewed. In this way, the requirements that should be fulfilled at the hardware level to guarantee successful execution of the task have been highlighted. The contribution of this research on the manipulation planning side focuses on the redundancy resolution that arises in the execution of a task by a dexterous arm/hand system. In the literature, the problem of arm-hand coordination during manipulation of an object has been widely analyzed in theory but often demonstrated experimentally only on simplified robotic setups. Our aim is to fill this gap and to evaluate the topic experimentally on a complex system such as an anthropomorphic arm/hand system.
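
For readers unfamiliar with the term, the textbook velocity-level redundancy-resolution formulation is sketched below (pseudoinverse solution plus a null-space term for a secondary objective); it illustrates the class of problem addressed, not the thesis's specific arm/hand coordination method:

```python
import numpy as np


def resolve_redundancy(J, x_dot, q_dot_secondary):
    """Velocity-level redundancy resolution: follow the task-space velocity
    x_dot with the minimum-norm (pseudoinverse) joint velocity and project a
    secondary joint-velocity objective into the Jacobian null space, where it
    does not disturb the task."""
    J_pinv = np.linalg.pinv(J)
    null_space = np.eye(J.shape[1]) - J_pinv @ J
    return J_pinv @ x_dot + null_space @ q_dot_secondary


# Hypothetical example: 7-DoF arm, 6-D task velocity
J = np.random.default_rng(1).standard_normal((6, 7))
q_dot = resolve_redundancy(J, np.zeros(6), 0.1 * np.ones(7))
```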

Relevance: 30.00%

Abstract:

The purpose of the air traffic management system is to ensure the safe and efficient flow of air traffic. Therefore, while augmenting efficiency, throughput and capacity in airport operations, attention has rightly been placed on doing so in a safe manner. In the control tower, many advances in operational safety have come in the form of visualization tools for tower controllers. However, there is a paradox in developing such systems to increase controllers' situational awareness: by creating additional computer displays, the controller's vision is pulled away from the outside view and the time spent looking down at the monitors is increased. This reduces their situational awareness by forcing them to mentally and physically switch between the head-down equipment and the outside view. This research is based on the idea that augmented reality may be able to address this issue. The augmented reality concept has become increasingly popular over the past decade and is being used proficiently in many fields, such as entertainment, cultural heritage, aviation, and military and defense. This know-how could be transferred to air traffic control with relatively low effort and substantial benefits for controllers' situation awareness. Research on this topic is consistent with the SESAR objectives of increasing air traffic controllers' situation awareness and enabling up to 10% additional flights at congested airports while still increasing safety and efficiency. During the Ph.D., a research framework for prototyping augmented reality tools was set up. This framework consists of methodological tools for designing the augmented reality overlays, as well as of hardware and software equipment to test them. Several overlays have been designed and implemented in a simulated tower environment, a virtual reconstruction of the Bologna airport control tower. The positive impact of such tools was preliminarily assessed by means of the proposed methodology.

Relevance: 30.00%

Abstract:

Through modelling activity, experimental campaigns, test-bench and on-field validation, a complete powertrain for a BEV has been designed, assembled and used in a motorsport competition. The activity can be split into three main subjects, representing the three key components of a BEV. First of all, a model of the entire powertrain has been developed in order to understand how the various design choices influence the race lap time. The data obtained were then used to design, build and test a first battery pack. After bench tests and track tests, it was understood that, by exploiting all the cell characteristics without violating the rules, higher energy and power densities could have been achieved. An updated battery pack was then designed, produced and raced with at MotoStudent 2018, resulting in a third place at its debut. The second topic of this PhD was the design of novel inverter topologies. Three inverters have been designed, two of them using Gallium Nitride devices, a promising semiconductor technology that can achieve high switching speeds while maintaining low switching losses. A high switching frequency is crucial to reduce the DC-bus capacitor and thus increase the power density of three-phase inverters (see the sketch below). The third inverter uses classic silicon devices but employs a ZVS (Zero Voltage Switching) topology. Despite the increased complexity of both the hardware and the control software, it can offer reduced switching losses while using conventional and established silicon MOSFET technology. Finally, the mechanical parts of a three-phase permanent magnet motor have been designed with the aim of employing it in UniBo Motorsport's 2020 Formula Student car.
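
To illustrate why a high switching frequency helps shrink the DC-bus capacitor, the sketch below uses a simplified first-order sizing relation (ripple charge per switching period over the allowed voltage ripple); both the relation and the numbers are illustrative and are not the sizing procedure used in the thesis:

```python
def dc_bus_capacitance(i_ripple, f_sw, dv_max):
    """First-order DC-bus capacitor sizing: the capacitor must supply the
    ripple current for roughly one switching period while the voltage ripple
    stays below dv_max, so the required capacitance scales with 1 / f_sw."""
    return i_ripple / (f_sw * dv_max)


# Illustrative numbers only: 20 A ripple, 5 V allowed ripple
print(dc_bus_capacitance(20.0, 20e3, 5.0))    # 2e-4 F (200 uF) at 20 kHz
print(dc_bus_capacitance(20.0, 100e3, 5.0))   # 4e-5 F (40 uF) at 100 kHz
```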

Relevance: 30.00%

Abstract:

The first topic analyzed in the thesis is Neural Architecture Search (NAS). I will focus on two different tools that I developed: one to optimize the architecture of Temporal Convolutional Networks (TCNs), a recently emerged convolutional model for time-series processing, and one to optimize the data precision of tensors inside CNNs. The first proposed NAS explicitly targets the optimization of the most peculiar architectural parameters of TCNs, namely dilation, receptive field, and the number of features in each layer; note that this is the first NAS to explicitly target these networks. The second proposed NAS instead focuses on finding the most efficient data format for a target CNN, at the granularity of the layer filter. Note that applying these two NASes in sequence allows an "application designer" to minimize the structure of the employed neural network, minimizing its number of operations or memory usage. After that, the second topic described is the optimization of neural network deployment on edge devices. Importantly, exploiting the scarce resources of edge platforms is critical for efficient NN execution on MCUs. To this end, I will introduce DORY (Deployment Oriented to memoRY), an automatic tool to deploy CNNs on low-cost MCUs. DORY, in different steps, can automatically manage the different levels of memory inside the MCU, offload the computational workload (i.e., the different layers of a neural network) to dedicated hardware accelerators, and automatically generate ANSI C code that orchestrates the off- and on-chip transfers together with the computation phases. On top of this, I will introduce two optimized computation libraries that DORY can exploit to efficiently deploy TCNs and Transformers on the edge. I conclude the thesis with two different applications in bio-signal analysis, i.e., heart rate tracking and sEMG-based gesture recognition.
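
As a small illustration of the architectural parameters the first NAS explores, the sketch below computes the receptive field of a stack of dilated causal convolutions from its per-layer kernel sizes and dilations (the standard TCN relation); the example network is hypothetical:

```python
def tcn_receptive_field(kernel_sizes, dilations):
    """Receptive field (in time steps) of a stack of 1-D dilated causal
    convolutions: each layer with kernel size k and dilation d widens the
    field by (k - 1) * d."""
    return 1 + sum((k - 1) * d for k, d in zip(kernel_sizes, dilations))


# Hypothetical network: 4 layers, kernel size 3, dilations doubling per layer
print(tcn_receptive_field([3, 3, 3, 3], [1, 2, 4, 8]))   # -> 31
```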

Relevance: 30.00%

Abstract:

This work deals with the development of calibration procedures and control systems to improve the performance and efficiency of modern spark-ignition turbocharged engines. The algorithms developed are used to optimize and manage the spark advance and the air-to-fuel ratio, in order to control knock and the exhaust gas temperature at the turbine inlet. The work described falls within the activity that the research group started in previous years with the industrial partner Ferrari S.p.A. The first chapter deals with the development of a control-oriented engine simulator based on a neural network approach, with which the main combustion indexes can be simulated. The second chapter deals with the development of a procedure to calibrate offline the spark advance and the air-to-fuel ratio so as to run the engine under knock-limited conditions and with the maximum admissible exhaust gas temperature at the turbine inlet. This procedure is then converted into a model-based control system and validated with a Software-in-the-Loop approach using the engine simulator developed in the first chapter. Finally, it is implemented on rapid control prototyping hardware to manage the combustion in steady-state and transient operating conditions at the test bench. The third chapter deals with the study of an innovative and cheap sensor for in-cylinder pressure measurement: a piezoelectric washer that can be installed between the spark plug and the engine head. The signal generated by this kind of sensor is studied, and a specific algorithm to adjust the value of the knock index in real time is developed. Finally, using the engine simulator developed in the first chapter, it is demonstrated that the innovative sensor can be coupled with the control system described in the second chapter and that the performance obtained could be the same as that reachable with standard in-cylinder pressure sensors.
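
As background for the knock-index discussion, the sketch below computes MAPO (Maximum Amplitude of Pressure Oscillations), a widely used pressure-based knock index; the thesis's real-time index adjustment from the piezoelectric washer signal is not reproduced, and the filter band and order here are illustrative:

```python
import numpy as np
from scipy.signal import butter, filtfilt


def mapo_knock_index(pressure, fs, band=(5e3, 20e3)):
    """MAPO (Maximum Amplitude of Pressure Oscillations): band-pass the
    in-cylinder pressure trace around the knock resonance frequencies and
    take the peak absolute amplitude of the filtered oscillations."""
    nyquist = fs / 2.0
    b, a = butter(4, [band[0] / nyquist, band[1] / nyquist], btype="band")
    oscillations = filtfilt(b, a, pressure)
    return float(np.max(np.abs(oscillations)))


# Illustrative use: pressure samples acquired at 100 kHz (hypothetical data)
# knock = mapo_knock_index(pressure_trace, fs=100e3)
```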