14 results for Data Acquisition Methods.
in AMS Tesi di Laurea - Alm@DL - Università di Bologna
Abstract:
In this thesis work, a cosmic-ray telescope was set up in the INFN laboratories in Bologna using smaller-size replicas of CMS Drift Tube chambers, called MiniDTs, to test and develop new electronics for the CMS Phase-2 upgrade. The MiniDTs were assembled at the INFN National Laboratory in Legnaro, Italy. Scintillator tiles complete the telescope, providing a signal independent of the MiniDTs for offline analysis. The telescope readout is a test system for the CMS Phase-2 upgrade data acquisition design. The readout is based on an early prototype of a radiation-hard FPGA-based board developed for the High-Luminosity LHC CMS upgrade, called On Board electronics for Drift Tubes. Once the set-up was operational, we developed an online monitor to display the most important observables in real time, in order to check the quality of the data acquisition. We then performed an offline analysis of the collected data using a custom version of the CMS software tools, which allowed us to estimate the time pedestal and drift velocity in each chamber, evaluate the efficiency of the different DT cells, and measure the space and time resolution of the telescope system.
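As a hedged note on the two calibration quantities mentioned above (the notation is assumed here, not taken from the thesis): in a drift tube cell the hit position is commonly reconstructed from the measured time once the time pedestal is subtracted, under the assumption of a constant drift velocity,

```latex
% Standard drift-tube position reconstruction (assumed notation)
x = v_{\mathrm{drift}} \, \bigl( t_{\mathrm{TDC}} - t_{\mathrm{ped}} \bigr)
```

so the accuracy of both estimates feeds directly into the space resolution quoted for the telescope.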
Abstract:
Making digital 3D models and replicas of cultural heritage assets currently plays an important role in preservation, providing a highly detailed source for future research and intervention. This dissertation assesses different methods for digitally surveying and making 3D replicas of cultural heritage assets at different scales. The methodologies vary in devices, software, workflow, and the amount of skill required. The 3D modelling process has three phases: data acquisition, modelling, and model presentation. Each phase is divided into sub-sections, and several approaches, methods, devices, and software tools may be employed; the selection should be based on the goal of the operation, the available facilities, the scale and properties of the object or structure to be modelled, and the operator's expertise and experience. The key point to remember is that the 3D modelling operation should be suitably accurate, precise, and reliable; accordingly, there is much guidance on how to perform 3D modelling effectively. This work attempts to compare and evaluate the various options for each phase, explaining and demonstrating their differences, benefits, and drawbacks, so as to serve as a simple guide for new and/or inexperienced users.
Abstract:
This thesis presents a CMOS amplifier with high common-mode rejection designed in UMC 130 nm technology. The goal is to achieve a high amplification factor for a wide range of biological signals (with frequencies in the 10 Hz-1 kHz range) and to reject the common-mode noise signal. A data acquisition system is presented, composed of a Delta-Sigma-like modulator and an antenna, that is the core of a portable low-complexity radio system; the amplifier is designed to interface the data acquisition system with a sensor that acquires the electrical signal. The modulator asynchronously acquires and samples human muscle activity, sending a quasi-digital pattern that encodes the acquired signal. Translating the muscle activity with this pattern entails only a minor loss of information compared to an encoding technique that uses a standard digital signal via Impulse-Radio Ultra-Wide Band (IR-UWB). The biological signals, needed for electromyographic analysis, have an amplitude of 10-100 μV and must be highly amplified and separated from the overwhelming 50 mV common-mode noise signal. Various proof-of-concept tests are presented, as well as proof that the design works even with different sensors, such as radiation measurement for dosimetry studies.
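As a hedged back-of-the-envelope reading of these figures (the definition is standard; the target value is an inference, not the thesis's stated specification): the common-mode rejection ratio of a differential amplifier is

```latex
% Standard CMRR definition (A_dm: differential gain, A_cm: common-mode gain)
\mathrm{CMRR} = 20 \log_{10} \frac{A_{\mathrm{dm}}}{A_{\mathrm{cm}}}
```

and since the 50 mV common-mode signal is up to 50 mV / 10 μV = 5000 times larger than the weakest signal of interest, roughly 20 log10(5000) ≈ 74 dB of rejection is needed just to bring it down to the signal level.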
Abstract:
In this work we study a model for breast image reconstruction in Digital Tomosynthesis, a non-invasive and non-destructive method for the three-dimensional visualization of the inner structures of an object, in which data acquisition consists of measuring a limited number of low-dose two-dimensional projections by moving a detector and an X-ray tube around the object within a limited angular range. Reconstructing 3D images from the projections provided by Digital Tomosynthesis is an ill-posed inverse problem, which leads to a minimization problem with an objective function containing a data-fitting term and a regularization term. The contribution of this thesis is to use compressed sensing techniques, in particular replacing the standard least-squares data-fitting problem with the minimization of the 1-norm of the residuals, and using Total Variation (TV) as the regularization term. We tested two different algorithms: a new alternating minimization algorithm (ADM), and a version of the more standard scaled projected gradient algorithm (SGP) adapted to the 1-norm. We performed experiments and analysed the performance of the two methods, comparing relative errors, iteration counts, run times and the quality of the reconstructed images. In conclusion, we found that the 1-norm and Total Variation are valid tools in formulating the minimization problem for image reconstruction in Digital Tomosynthesis, and the new ADM algorithm reached a relative error comparable to that of a version of the classic SGP algorithm while proving better in speed and in the early appearance of the structures representing the masses.
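As a hedged restatement of the model just described (the notation is assumed here, not copied from the thesis): with A the projection operator, b the measured projection data, and λ > 0 the regularization weight, the reconstruction problem takes the form

```latex
% 1-norm data fitting with Total Variation regularization (assumed notation)
\min_{x} \; \lVert A x - b \rVert_{1} + \lambda \, \mathrm{TV}(x)
```

which replaces the least-squares data-fitting term of the standard formulation.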
Abstract:
Vision systems are powerful tools playing an increasingly important role in modern industry, detecting errors and maintaining product standards. With the wider availability of affordable industrial cameras, computer vision algorithms have been increasingly applied to the monitoring of industrial manufacturing processes. Until a few years ago, industrial computer vision applications relied only on ad-hoc algorithms designed for the specific object and acquisition setup being monitored, with a strong focus on co-designing the acquisition and processing pipeline. Deep learning has overcome these limits, providing greater flexibility and faster re-configuration. In this work, the process to be inspected is the formation of packs of vials entering a freeze-dryer, a common scenario in pharmaceutical active-ingredient packaging lines. To ensure that the machine produces proper packs, a vision system is installed at the entrance of the freeze-dryer to detect any anomalies, with execution times compatible with the production specifications. Other constraints come from the sterility and safety standards required in pharmaceutical manufacturing. This work presents an overview of the production line, with particular focus on the vision system designed, and of all the trials conducted to reach the final performance. Transfer learning, which alleviates the need for large amounts of training data, combined with data augmentation methods based on the generation of synthetic images, was used to effectively increase performance while reducing the cost of data acquisition and annotation. The proposed vision algorithm is composed of two main subtasks, vial counting and discrepancy detection. The first was trained on more than 23k vials (about 300 images) and tested on 5k more (about 75 images), whereas 60 training images and 52 testing images were used for the second.
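A minimal sketch of the transfer-learning-plus-augmentation recipe described above, assuming PyTorch/torchvision; the two-class head (proper pack vs. anomaly), the specific augmentations and all hyperparameters are illustrative, not the thesis's actual configuration:

```python
# Hedged sketch: transfer learning with on-the-fly augmentation.
# Random tensors stand in for the acquired images.
import torch
import torch.nn as nn
from torchvision import models, transforms

# Augmentations add synthetic variety on top of a small labelled set.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomAffine(degrees=5, translate=(0.02, 0.02)),
])

# Placeholder batch: 16 RGB images, 224x224 (replace with real acquisitions).
images = augment(torch.rand(16, 3, 224, 224))
labels = torch.randint(0, 2, (16,))

# Transfer learning: reuse ImageNet features, train only the new head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()
optimizer.step()
```

Freezing the backbone and training only the final layer is what keeps the labelled-data requirement small, which is the point the abstract makes about reducing acquisition and annotation cost.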
Abstract:
Recent developments in the field of artificial intelligence have enabled more adequate classification of the EEG signal. In recent years it has been shown that excellent classification performance can be obtained using Machine Learning (ML) and Deep Learning (DL) techniques, the latter relying on convolutional neural networks (CNNs). Deep Learning, in particular, requires large amounts of training data, whereas EEG datasets are often limited, so high performance is hard to reach. Data augmentation methods can alleviate this problem: starting from real data, these techniques create the artificial data needed to enlarge the original dataset. The most common application is to use data augmentation to increase the size of the training set, so that the model/neural network is trained on a larger number of samples, reducing classification errors. Building on this idea, data augmentation has been applied in many fields, and in particular to EEG signal classification. This thesis first describes data augmentation methods developed over the years that are also applicable to EEG. It then presents specific studies that apply data augmentation to improve the performance of EEG-based classifiers for sleep/wake state identification, emotion recognition, and motor imagery classification.
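A minimal sketch of the kind of EEG augmentation transforms discussed above (noise injection, time shifting, amplitude scaling); the array layout and parameter values are illustrative assumptions:

```python
# Hedged sketch of common EEG data-augmentation transforms on
# (trials, channels, samples) arrays. Values are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(x, sigma=0.01):
    """Jitter each sample with zero-mean Gaussian noise."""
    return x + rng.normal(0.0, sigma, size=x.shape)

def time_shift(x, max_shift=50):
    """Circularly shift the signal along the time axis."""
    return np.roll(x, rng.integers(-max_shift, max_shift + 1), axis=-1)

def amplitude_scale(x, low=0.9, high=1.1):
    """Rescale the whole trial by a random factor."""
    return x * rng.uniform(low, high)

# Example: double a training set of placeholder EEG trials.
X = rng.standard_normal((100, 32, 512))
X_aug = np.stack([amplitude_scale(time_shift(add_gaussian_noise(t))) for t in X])
X_train = np.concatenate([X, X_aug], axis=0)   # 200 trials after augmentation
```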
Abstract:
Privacy issues and data scarcity in the PET field call for efficient methods to expand datasets via the synthetic generation of new data that cannot be traced back to real patients and yet remain realistic. In this thesis, machine learning techniques were applied to 1001 amyloid-beta PET images that had been assessed for Alzheimer's disease: 540 evaluations were positive, 457 negative and 4 unknown. The Isomap algorithm was used as a manifold learning method to reduce the dimensionality of the PET dataset; a numerical scale-free interpolation method was applied to invert the dimensionality reduction map. The interpolant was tested on the PET images via LOOCV, comparing the removed images with the reconstructed ones using the mean SSIM index (MSSIM = 0.76 ± 0.06). The effectiveness of this measure is questionable, since it indicated slightly higher performance for a comparison method using PCA (MSSIM = 0.79 ± 0.06), which gave clearly poorer reconstructed images than those recovered by the numerical inverse mapping. Ten synthetic PET images were generated and, after being mixed with ten originals, were sent to a team of clinicians for a visual assessment of their realism; no significant agreement was found either between the clinicians and the true image labels or among the clinicians, meaning that original and synthetic images were indistinguishable. The future perspective of this thesis points to improving the amyloid-beta PET research field by increasing the available data, overcoming the constraints of data acquisition and privacy issues. Potential improvements can be achieved by refining the manifold learning and inverse mapping stages of the PET image analysis, by exploring different combinations of algorithm parameters and by applying other non-linear dimensionality reduction algorithms. A final prospect of this work is the search for new methods to assess image reconstruction quality.
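A rough sketch of the pipeline's building blocks under stated assumptions: the thesis's scale-free inverse interpolant is custom, so scipy's RBF interpolator stands in for it here purely as an illustration, with random arrays in place of PET images:

```python
# Hedged sketch: Isomap embedding, an approximate inverse map back to image
# space, and SSIM scoring of a left-out reconstruction.
import numpy as np
from sklearn.manifold import Isomap
from scipy.interpolate import RBFInterpolator
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(0)
X = rng.random((200, 64 * 64))                 # placeholder flattened images
Z = Isomap(n_components=10).fit_transform(X)   # low-dimensional coordinates

# Leave-one-out in the spirit of the LOOCV test: fit the inverse map without
# sample 0, then reconstruct it from its embedding coordinates.
inverse = RBFInterpolator(Z[1:], X[1:])
x_rec = inverse(Z[:1])[0].reshape(64, 64)
x_true = X[0].reshape(64, 64)

print("SSIM:", ssim(x_true, x_rec,
                    data_range=x_true.max() - x_true.min()))
```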
Abstract:
The work of this thesis mainly concerns the design, simulation and laboratory testing of three successive versions of VME boards, called Read Out Drivers (RODs), fabricated for the 2014 upgrade of the ATLAS Insertable B-Layer (IBL) experiment at CERN. IBL is a new layer that will become part of the ATLAS Pixel Detector. The thesis opens with a descriptive overview of the ATLAS experiment in general and then focuses on the IBL layer itself. It also covers physical and technical aspects in detail: design specifications, the production path of the boards and the subsequent tests. The boards were first produced as two prototypes, fabricated to evaluate the overall characteristics and performance of the readout system. A second production batch, of five boards, was aimed at fine corrections of the critical issues that emerged from the tests of the first batch. A thorough, fine-grained investigation of the system tuned the boards for the fabrication of a third batch of five more. Production is now complete: 20 final boards have been built and are currently under test. The production will be validated shortly and the 20 boards will be delivered to CERN to be inserted into the detector's data acquisition system. At present, the Department of Physics and Astronomy of the University of Bologna is involved in a pixel experiment only through the IBL described in this thesis. In conclusion, the thesis work focused mainly on testing the boards and on designing the firmware needed for detector calibration and data taking.
Abstract:
The present study was conducted to investigate the influence of restricted food access on Solea senegalensis behaviour and on the daily expression of clock genes in central (diencephalon and optic tectum) and peripheral (liver) tissues. The Senegalese sole is a marine teleost fish belonging to the class Actinopterygii, order Pleuronectiformes and family Soleidae. Its geographical distribution in the Mediterranean Sea is fairly broad, covering the south and east of the Iberian Peninsula, the north of Africa and the Middle East up to the coast of Turkey. From a commercial perspective, Solea senegalensis has acquired in recent years a key role in the aquaculture industry of the Iberian Peninsula. The Senegalese sole is also acquiring an important relevance in chronobiological studies, as the number of published works focused on the sole circadian system has increased in the last few years. The molecular mechanisms underlying sole circadian rhythms have also been explored recently, both in adult and in developing sole. Moreover, the consideration of the order Pleuronectiformes as one of the most evolved teleost groups makes the Senegalese sole a species of high interest from a comparative and phylogenetic point of view. All these facts reinforced the choice of the Senegalese sole as the model species for the present study. The animals were kept under a 12L:12D photoperiod and divided into three experimental groups depending on feeding time: fed at midlight (ML), middark (MD) or random (RND) times. Throughout the experiment, the existence of a daily activity rhythm and its synchronization to the light-dark and feeding cycles were checked. To this end, locomotor activity was recorded by means of two infrared photocells placed in a PVC tube, one 10 cm below the water surface (upper photocell) and the other 10 cm above the bottom of the tank (bottom photocell). The photocells were connected to a computer so that every time a fish interrupted the infrared light beam, an output signal was produced and recorded. The number of light beam interruptions was stored every 10 minutes by specialized data acquisition software.
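A minimal sketch of the recording logic just described, under the assumption that the acquisition software exposes interruption timestamps; event times are simulated here:

```python
# Hedged sketch: count beam-interruption events in 10-minute bins to build
# the locomotor activity record.
import numpy as np

BIN = 600                                           # 10 minutes, in seconds
rng = np.random.default_rng(0)
events = np.sort(rng.uniform(0, 86400, size=5000))  # one simulated day

edges = np.arange(0, 86400 + BIN, BIN)
counts, _ = np.histogram(events, bins=edges)        # interruptions per bin
print(counts[:6])                                   # activity in the first hour
```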
Abstract:
In the last decade, the mechanical characterization of bone segments has been seen as fundamental to understanding how physiological loads are distributed over the bone in everyday life, and the structural deformations that result. Characterization makes it possible to obtain the main load directions and, consequently, to observe how the structural lamellae of the bone are arranged, in order to recreate a prosthesis using artificial materials that behave naturally. This thesis presents a modular system for the in vitro mechanical characterization of bone segments, with particular attention to vertebrae, the current object of study and research in the lab where I did my thesis work. The system is able to acquire and process all the appropriately conditioned signals of interest for the test, through a dedicated hardware and software architecture, with high speed and high reliability. The aim of my thesis is to create a system that can be used as a versatile tool for experimentation and innovation in future mechanical characterization tests of biological components, allowing a quantitative and qualitative assessment of the deformation under analysis, regardless of the anatomical region of interest.
Abstract:
Opportunistic diseases caused by the Human Immunodeficiency Virus (HIV) and the Hepatitis B Virus (HBV) are an omnipresent global challenge. In order to manage these epidemics, we need low-cost, easily deployable platforms at the point of care in highly congested regions like airports and public transit systems. In this dissertation we present our findings on using Localized Surface Plasmon Resonance (LSPR)-based detection of pathogens and other clinically relevant applications on microfluidic platforms in point-of-care, resource-constrained settings. The work presented here adopts the novel LSPR technique to multiplex a lab-on-a-chip device capable of quantitatively detecting various types of intact viruses and their subtypes, based on the principle that a change in wavelength occurs when a metal nanoparticle surface is modified with a specific surface chemistry allowing the binding of a desired pathogen to a specific antibody. We demonstrate the ability to detect and quantify HIV subtypes A, B, C, D, E, G and panel HIV, with specificity down to 100 copies/mL, using both whole blood samples and HIV-patient blood samples discarded from clinics. These results were compared against the gold standard, Reverse Transcriptase quantitative Polymerase Chain Reaction (RT-qPCR). The microfluidic device has a total assay evaluation time of about 70 minutes: 60 minutes for capture and 10 minutes for data acquisition and processing. This LOC platform eliminates the need for any sample preparation before processing. The platform is highly multiplexable, as the same surface chemistry can be adapted to capture and detect several other pathogens such as dengue virus, E. coli, M. tuberculosis, etc.
Abstract:
This thesis work arose from the need to develop a new module for estimating energy variables to be integrated into the On.Energy software, giving all users a way to understand how far the values observed in practice deviate from a purpose-built theoretical model, and thus providing an additional analysis tool for keeping the system under control in a continuous-improvement perspective. The result is a tool that will be tested experimentally at the plants of two companies that are leaders in Italy in widely different production sectors (Amadori - food, Pfizer - pharmaceutical), but that share the need to monitor, analyse and improve the efficiency of their energy consumption.
Experimental characterization and modelling of a servo-pneumatic system for a knee loading apparatus
Abstract:
The new knee test rig developed at the University of Bologna uses a pneumatic cylinder as its actuation system. Specific characterization and modelling of the pneumatic cylinder and the related devices are needed to better control the test rig. In this thesis, an experimental environment for the related devices is set up, with a data acquisition system based on Real-Time Windows Target, Simulink and MATLAB. Based on the experimental data, a fitted model for the pneumatic cylinder friction is derived.
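A hedged sketch of the kind of friction-model fitting described: a Coulomb + Stribeck + viscous model fitted by least squares; the model form, parameter values and data are illustrative assumptions, since the thesis derives its own model from the acquired data:

```python
# Hedged sketch: fit a friction model to (velocity, force) samples.
import numpy as np
from scipy.optimize import curve_fit

def friction(v, Fc, Fs, vs, b):
    """Coulomb level Fc, Stribeck peak Fs with velocity scale vs, viscous b."""
    return np.sign(v) * (Fc + (Fs - Fc) * np.exp(-(v / vs) ** 2)) + b * v

rng = np.random.default_rng(0)
v = np.linspace(-0.5, 0.5, 200)                      # placeholder velocity sweep
F = friction(v, 20.0, 35.0, 0.05, 40.0) + rng.normal(0.0, 1.0, v.size)

popt, _ = curve_fit(friction, v, F, p0=[10.0, 20.0, 0.1, 10.0])
print("fitted Fc, Fs, vs, b:", popt)
```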
Abstract:
Numerous observations made since the 1930s confirm that about 26% of the Universe consists of dark matter. This matter interacts only gravitationally and weakly: it is massive and neutral. Among the many hypotheses on the nature of dark matter, one of the most accredited is that of WIMPs (Weakly Interacting Massive Particles). The leading project in the direct search for WIMPs is XENON, at the Laboratori Nazionali del Gran Sasso (LNGS). The experiment is based on the elastic scattering of the sought particles off xenon nuclei; the detector is a dual-phase (liquid-gas) TPC. The direct detection of dark matter requires a very large detector, because of the small interaction probability, and an environment of low natural radioactivity, to minimize the background. With the aim of improving the detector's sensitivity by lowering the energy threshold, alternatives to the currently adopted solutions are under research and development. One such solution involves SiPM photodetectors working alongside the PMTs currently in use. These silicon photodetectors must operate at a temperature of about 170 K and must detect photons with a wavelength of about 175 nm. This thesis work is part of that R&D project. Its purpose was to write a DAQ program in the LabVIEW environment to acquire data for characterizing SiPM photodetectors in air. The program was then used for preliminary pedestal measurements, from which the gain and dark rate trends as a function of the SiPM bias voltage were determined. The data analysis was carried out with a C++ program able to analyse the waveforms acquired by the LabVIEW program.
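A hedged sketch of one step of the analysis described above: estimating the SiPM gain as the spacing between consecutive photoelectron peaks in a charge spectrum. The simulated spectrum and peak-finding parameters are illustrative; the actual analysis runs on the waveforms acquired by the LabVIEW program:

```python
# Hedged sketch: SiPM gain from the photoelectron peak spacing of a
# simulated charge spectrum (pedestal plus 1, 2, 3 p.e. peaks).
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(1)
charges = np.concatenate([rng.normal(k * 5.0, 0.4, 4000 // (k + 1))
                          for k in range(4)])
hist, edges = np.histogram(charges, bins=300)
centers = 0.5 * (edges[:-1] + edges[1:])

peaks, _ = find_peaks(hist, height=30, distance=20)
gain = np.mean(np.diff(centers[peaks]))   # charge units per photoelectron
print("estimated gain:", gain)
```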