8 results for Sound speed
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
In the context of a “testing laboratory”, one of the most important aspects to deal with is the measurement result. Whenever decisions are based on measurement results, it is important to have some indication of the quality of those results. In every area concerned with noise measurement many standards are available, but without an expression of uncertainty it is impossible to judge whether two results are in compliance or not. ISO/IEC 17025 is an international standard concerning the competence of calibration and testing laboratories. It contains the requirements that testing and calibration laboratories have to meet if they wish to demonstrate that they operate a quality system, are technically competent and are able to generate technically valid results. ISO/IEC 17025 deals specifically with the requirements for the competence of laboratories performing testing and calibration and for the reporting of the results, which may or may not contain opinions and interpretations of the results. The standard requires appropriate methods of analysis to be used for estimating the uncertainty of measurement. From this point of view, for a testing laboratory performing sound power measurements according to specific ISO standards and European Directives, the evaluation of uncertainties is the most important factor to deal with. Sound power level measurement according to ISO 3744:1994, performed with a limited number of microphones distributed over a surface enveloping a source, is affected by a certain systematic error and a related standard deviation. Comparing measurements carried out with different microphone arrays is difficult because the results are affected by systematic errors and standard deviations that depend on the number of microphones placed on the surface, their spatial positions and the complexity of the sound field. A statistical approach can give an overview of the differences between sound power levels evaluated with different microphone arrays and an evaluation of the errors that affect this kind of measurement. In contrast to the classical approach, which tends to follow the ISO GUM, this thesis presents a different point of view on the problem of comparing results obtained from different microphone arrays.
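As a point of reference for the comparison discussed above, the basic ISO 3744 relation estimates the sound power level from the energy-averaged sound pressure level over the microphone positions plus a surface term. The sketch below is a minimal Python illustration of that relation only; it omits the background-noise and environmental corrections of the standard and is not the statistical procedure developed in the thesis. The example arrays and the hemispherical surface are hypothetical.

```python
import numpy as np

def sound_power_level(spl_db, surface_area_m2):
    """Estimate L_W from SPL readings at N microphone positions.

    spl_db          : sound pressure levels (dB) on the enveloping surface
    surface_area_m2 : area S of the measurement surface (reference S0 = 1 m^2)
    """
    spl_db = np.asarray(spl_db, dtype=float)
    # Energy average of the pressure levels over the N positions
    lp_mean = 10.0 * np.log10(np.mean(10.0 ** (spl_db / 10.0)))
    # Surface correction term 10*log10(S/S0)
    return lp_mean + 10.0 * np.log10(surface_area_m2 / 1.0)

# Two hypothetical arrays over the same hemispherical surface (radius 2 m)
surface = 2.0 * np.pi * 2.0 ** 2
array_10_mics = [78.1, 77.9, 78.4, 77.6, 78.0, 78.2, 77.8, 78.3, 77.7, 78.1]
array_5_mics = [78.0, 78.3, 77.7, 78.2, 77.9]
print(sound_power_level(array_10_mics, surface))
print(sound_power_level(array_5_mics, surface))
```

Comparing the two printed values for arrays with different numbers and positions of microphones is exactly the kind of comparison whose systematic error and spread the thesis addresses.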
Abstract:
Several activities were carried out during my PhD. For the NEMO experiment, a collaboration between the INFN/University groups of Catania and Bologna led to the development and production of a mixed-signal acquisition board for the NEMO Km3 telescope. The research concerned the feasibility study of an acquisition technique quite different from that adopted in the NEMO Phase 1 telescope. The DAQ board that we realized exploits the LIRA06 front-end chip for the analog acquisition of the anodic and dynodic sources of a PMT (Photo-Multiplier Tube). The low-power analog acquisition allows multiple channels of the PMT to be sampled simultaneously at different gain factors in order to increase the linearity of the signal response over a wider dynamic range. The auto-triggering and self-event-classification features also help to improve the acquisition performance and the knowledge of the neutrino event. A fully functional interface towards the first-level data concentrator, the Floor Control Module, has been integrated on the board as well, and specific firmware has been realized to comply with the present communication protocols. This stage of the project foresees the use of an FPGA, a high-speed configurable device, to provide the board with a flexible digital logic control core. After the validation of the whole front-end architecture, this feature would probably be integrated in a common mixed-signal ASIC (Application Specific Integrated Circuit). The volatile nature of the configuration memory of the FPGA implied the integration of a flash ISP (In System Programming) memory and a smart architecture for its safe remote reconfiguration. All the integrated features of the board have been tested. At the Catania laboratory the behavior of the LIRA chip was investigated in the digital environment of the DAQ board and we succeeded in driving the acquisition with the FPGA. PMT pulses generated with an arbitrary waveform generator were correctly triggered and acquired by the analog chip, and subsequently digitized by the on-board ADC under the supervision of the FPGA. For the communication towards the data concentrator, a test bench was set up in Bologna where, thanks to a loan from Roma University and INFN, a full readout chain equivalent to that present in NEMO Phase 1 was installed. These tests showed good behavior of the digital electronics, which was able to receive and execute commands issued from the PC console and to answer back with a reply. The remotely configurable logic also behaved well and demonstrated, at least in principle, the validity of this technique. A new prototype board is now under development at the Catania laboratory as an evolution of the one described above. This board is going to be deployed within the NEMO Phase 2 tower, in one of its floors dedicated to new front-end proposals. It will integrate a new analog acquisition chip called SAS (Smart Auto-triggering Sampler), thus introducing a new analog front-end while inheriting most of the digital logic present in the current DAQ board discussed in this thesis. As for the activity on high-resolution vertex detectors, I worked within the SLIM5 collaboration on the characterization of a MAPS (Monolithic Active Pixel Sensor) device called APSEL-4D. This chip is a matrix of 4096 active pixel sensors with deep N-well implantations meant to collect charge and to shield the analog electronics from digital noise.
The chip integrates the full-custom sensor matrix and the sparsification/readout logic realized with standard cells in STM 130 nm CMOS technology. For the chip characterization, a test beam was set up on the 12 GeV PS (Proton Synchrotron) line facility at CERN in Geneva (CH). The collaboration prepared a silicon strip telescope and a DAQ system (hardware and software) for data acquisition and control of the telescope, which allowed about 90 million events to be stored in 7 equivalent days of beam live-time. My activities basically concerned the realization of a firmware interface to and from the MAPS chip in order to integrate it into the general DAQ system. Thereafter I worked on the DAQ software to implement a proper Slow Control interface for APSEL-4D. Several APSEL-4D chips with different thinnings were tested during the test beam. Those thinned to 100 and 300 um presented an overall efficiency of about 90% with a threshold of 450 electrons. The test beam also allowed the resolution of the pixel sensor to be estimated, providing good results consistent with the pitch/sqrt(12) formula. The MAPS intrinsic resolution was extracted from the width of the residual distribution, taking into account the multiple-scattering effect.
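The last two sentences describe a standard procedure: compare the residual width against the binary-readout expectation pitch/sqrt(12) after subtracting, in quadrature, the track extrapolation uncertainty (telescope resolution plus multiple scattering). A minimal sketch of that arithmetic is given below; the pitch and the example widths are assumed values for illustration, not figures from the thesis.

```python
import math

def expected_binary_resolution(pitch_um):
    """Expected resolution of a binary-readout pixel: pitch / sqrt(12)."""
    return pitch_um / math.sqrt(12.0)

def intrinsic_resolution(sigma_residual_um, sigma_track_um):
    """Unfold the sensor's intrinsic resolution from the residual width.

    sigma_residual_um : measured width of the residual distribution
    sigma_track_um    : track extrapolation uncertainty at the MAPS plane
                        (telescope resolution + multiple scattering),
                        removed in quadrature
    """
    return math.sqrt(sigma_residual_um ** 2 - sigma_track_um ** 2)

# Hypothetical numbers, for illustration only
pitch = 50.0                                    # um, assumed pixel pitch
print(expected_binary_resolution(pitch))        # ~14.4 um
print(intrinsic_resolution(16.0, 7.0))          # ~14.4 um
```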
Abstract:
Several diagnostic techniques are presented for the detection of electrical faults in induction motor variable-speed drives. These techniques are developed taking into account the impact of the control system on the machine variables and non-stationary operating conditions.
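The abstract does not name the specific techniques. As a generic illustration of one widely used approach to electrical-fault detection in induction machines (motor current signature analysis, not necessarily the method developed in the thesis), the sketch below looks for the (1 ± 2s)·f sidebands of the supply frequency in the stator current spectrum, a classical broken-rotor-bar indicator valid only under quasi-stationary conditions.

```python
import numpy as np

def broken_bar_sidebands(current, fs, f_supply, slip):
    """Relative amplitude (dB) of the (1 +/- 2s)f sidebands in the stator current.

    current  : stator current samples from a steady-state window
    fs       : sampling frequency in Hz
    f_supply : supply frequency in Hz
    slip     : per-unit slip of the machine
    """
    window = np.hanning(len(current))
    spectrum = np.abs(np.fft.rfft(current * window))
    freqs = np.fft.rfftfreq(len(current), d=1.0 / fs)

    def amp_at(f):
        # Amplitude of the spectral bin closest to frequency f
        return spectrum[np.argmin(np.abs(freqs - f))]

    lower = amp_at((1.0 - 2.0 * slip) * f_supply)
    upper = amp_at((1.0 + 2.0 * slip) * f_supply)
    fundamental = amp_at(f_supply)
    # Large sideband-to-fundamental ratios are a classical fault indicator
    return (20.0 * np.log10(lower / fundamental),
            20.0 * np.log10(upper / fundamental))
```

Under non-stationary operation and closed-loop control, which is the situation the thesis targets, the controller partially masks these signatures, which is precisely why control-aware techniques are needed.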
Abstract:
Theoretical models are developed for the continuous-wave and pulsed laser incision and cut of thin single- and multi-layer films. A one-dimensional steady-state model establishes the theoretical foundations of the problem by combining a power-balance integral with heat flow in the direction of laser motion. In this approach, classical modelling methods for laser processing are extended by introducing multi-layer optical absorption and thermal properties. The calculation domain is consequently subdivided in correspondence with the progressive removal of individual layers. A second, time-domain numerical model for the short-pulse laser ablation of metals accounts for changes in optical and thermal properties during a single laser pulse. With sufficient fluence, the target surface is heated towards its critical temperature and homogeneous boiling or "phase explosion" takes place. Improvements over previous work are obtained through the more accurate calculation of optical absorption and of the shielding of the incident beam by the ablation products. A third, general time-domain numerical laser processing model combines ablation depth and energy absorption data from the short-pulse model with two-dimensional heat flow in an arbitrary multi-layer structure. Layer removal is the result of both progressive short-pulse ablation and classical vaporisation due to long-term heating of the sample. At low velocity, pulsed laser exposure of multi-layer films comprising aluminium-plastic and aluminium-paper is found to be characterised by short-pulse ablation of the metallic layer and vaporisation or degradation of the others due to thermal conduction from the former. At high velocity, all layers of the two films are ultimately removed by vaporisation or degradation as the average beam power is increased to achieve a complete cut. The transition velocity between the two characteristic removal types is shown to be a function of the pulse repetition rate. An experimental investigation validates the simulation results and provides new laser processing data for some typical packaging materials.
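To make the idea of a one-dimensional power balance concrete, the sketch below equates absorbed beam power with the power needed to heat and vaporise the material swept through the kerf per unit time and solves for a steady-state cut depth. This is a classical upper-bound estimate that neglects conduction losses and multi-layer effects; it is a simplified stand-in for, not a reproduction of, the model developed in the thesis, and the material values are illustrative only.

```python
def vaporisation_cut_depth(power_w, absorptivity, speed_m_s, kerf_m,
                           rho, c_p, t_vap, t_0, latent_melt, latent_vap):
    """One-dimensional power balance:
    absorbed power = mass removal rate * energy per unit mass.

    Returns the steady-state cut depth in metres; conduction losses are
    neglected, so this is an upper bound.
    """
    energy_per_kg = c_p * (t_vap - t_0) + latent_melt + latent_vap
    swept_area_rate = speed_m_s * kerf_m      # m^2/s swept per unit depth
    return absorptivity * power_w / (swept_area_rate * rho * energy_per_kg)

# Illustrative, roughly aluminium-like values (not from the thesis)
depth = vaporisation_cut_depth(power_w=100.0, absorptivity=0.1,
                               speed_m_s=0.5, kerf_m=30e-6,
                               rho=2700.0, c_p=900.0, t_vap=2743.0, t_0=293.0,
                               latent_melt=4.0e5, latent_vap=1.05e7)
print(f"estimated cut depth: {depth * 1e6:.1f} um")   # ~19 um
```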
Abstract:
The study presented here concerns laser welding applications characterised by non-conventional aspects and is composed of three main strands. In the first area of work, the possibility of performing fusion welding, with a continuous-wave laser, on Aluminum Foam Sandwich panels and on aluminium-foam-filled tubes was evaluated. The study highlighted numerous operational guidelines concerning the problems of welding the external skins of the components and demonstrated the feasibility of an integrated laser joining approach (welding followed by a post-weld heat treatment) for producing the complete joint of foam-filled tubular parts with restoration of the cellular structure at the joining interface. The second area of work is characterised by the application of a very low-power laser source, operating in short-pulse regime, to the welding of high-carbon steel. The study showed how this type of source, usually applied to ablation and marking operations, can also be applied to the welding of sub-millimetre thicknesses. In this phase, the role of the process parameters on the joint shape was highlighted and the feasibility window of the process was defined. The study was completed by investigating the possibility of applying a post-weld laser treatment to soften any hardened zones. As for the last area of work, the study focused on the use of high-power-density sources (60 MW/cm^2) in the deep-penetration welding of structural steels. The experimental activity and the analysis of the results were carried out by means of Design of Experiment techniques in order to assess the precise role of all the process parameters, and numerous considerations regarding hot-crack formation were put forward.
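Two quantitative elements of this abstract can be illustrated briefly: the quoted power density of 60 MW/cm^2, and the full-factorial experimental plans typical of a Design of Experiment campaign. The Python sketch below shows both; the beam power, spot diameter, process parameters and levels are hypothetical and are not taken from the thesis.

```python
import itertools
import math

def power_density_mw_cm2(power_w, spot_diameter_um):
    """Power density on the workpiece, assuming a uniform circular spot."""
    radius_cm = spot_diameter_um * 1e-4 / 2.0
    return power_w / (math.pi * radius_cm ** 2) / 1e6

# e.g. ~3 kW focused into an ~80 um spot is on the order of 60 MW/cm^2
print(power_density_mw_cm2(3000.0, 80.0))

# Full-factorial plan over hypothetical process parameters and levels
levels = {
    "power_W": [2000, 2500, 3000],
    "speed_m_min": [1.0, 2.0, 3.0],
    "focus_mm": [-1.0, 0.0, 1.0],
}
plan = [dict(zip(levels, combo)) for combo in itertools.product(*levels.values())]
print(len(plan), "runs")   # 27 runs for a 3^3 full factorial
```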
Abstract:
The aim of this study was to examine whether a real high-speed, short-term competition influences clinicopathological data, focusing on muscle enzymes, iron profile and Acute Phase Proteins. Thirty Thoroughbred racing horses (15 geldings and 15 females) aged between 4 and 12 years (mean 7 years) were used for the study. All the animals performed a high-speed, short-term competition over a distance of 154 m in about 12 seconds, repeated 8 times within approximately one hour (Niballo Horse Race). Blood samples were obtained 24 hours before and within 30 minutes after the end of the races. A complete blood count (CBC) and biochemical and haemostatic profiles were performed on all samples. The post-race concentration of each parameter was corrected using an estimation of the plasma volume contraction based on the individual albumin (Alb) concentration. Data were analysed with descriptive statistics, and the percentage of variation from the baseline values was recorded. Pre- and post-race results were compared with non-parametric statistics (Mann-Whitney U test). A difference was considered significant at p<0.05. A significant plasma volume contraction after the race was detected (Hct, Alb; p<0.01). Other relevant findings were increased concentrations of muscle enzymes (CK, LDH; p<0.01) and of Crt (p<0.01), a significant increase in uric acid (p<0.01), a significant decrease in haptoglobin (p<0.01) associated with an increase in ferritin concentrations (p<0.01), and a significant decrease in fibrinogen (p<0.05) accompanied by a non-significant increase in D-dimer concentrations (p=0.08). This competition produced relevant clinicopathological abnormalities in galloping horses. This study confirms significant muscular damage, oxidative stress, intravascular haemolysis and subclinical haemostatic alterations. Further studies are needed to better understand the pathogenesis, the medical relevance and the impact on performance of these alterations in equine sports medicine.
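The abstract mentions an albumin-based correction for plasma volume contraction and a Mann-Whitney U comparison of pre- and post-race values. One common form of such a correction scales each post-race concentration by the pre/post albumin ratio; the exact formula used in the study is not given, so the sketch below (with invented CK and albumin values and scipy's mannwhitneyu) is only an illustration of the workflow.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def correct_for_plasma_volume(post_value, alb_pre, alb_post):
    """Scale a post-race concentration by the pre/post albumin ratio to
    compensate for plasma volume contraction (one common form of the
    correction; the study's exact formula is not stated in the abstract)."""
    return post_value * alb_pre / alb_post

# Hypothetical CK values (U/L) for a few horses, before and after the race
ck_pre = np.array([180.0, 210.0, 195.0, 170.0, 220.0])
ck_post_raw = np.array([420.0, 510.0, 460.0, 390.0, 530.0])
alb_pre = np.array([3.1, 3.0, 3.2, 3.1, 2.9])
alb_post = np.array([3.5, 3.4, 3.6, 3.4, 3.3])

ck_post = correct_for_plasma_volume(ck_post_raw, alb_pre, alb_post)
stat, p = mannwhitneyu(ck_pre, ck_post, alternative="two-sided")
print(f"corrected post-race CK: {np.round(ck_post, 1)}, p = {p:.3f}")
```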
Abstract:
The aim of this thesis is to investigate the nature of quantum computation and the question of the quantum speed-up over classical computation by comparing two different quantum computational frameworks, the traditional quantum circuit model and the cluster-state quantum computer. After an introductory survey of the theoretical and epistemological questions concerning quantum computation, the first part of this thesis provides a presentation of cluster-state computation suitable for a philosophical audience. In spite of the computational equivalence between the two frameworks, their differences can be considered structural. Entanglement is shown to play a fundamental role in both quantum circuits and cluster-state computers; this supports, from a new perspective, the argument that entanglement can reasonably explain the quantum speed-up over classical computation. However, quantum circuits and cluster-state computers diverge with regard to one of the explanations of quantum computation that accords a central role to entanglement, i.e. the Everett interpretation. It is argued that, while cluster-state quantum computation does not reveal an Everettian failure to account for the computational processes, it threatens to leave that interpretation non-explanatory. The analysis presented here should be integrated into a more general work that also includes further frameworks of quantum computation, e.g. topological quantum computation. What this work reveals, however, is that the speed-up question does not capture all that is at stake: both quantum circuits and cluster-state computers achieve the speed-up, but the challenges they pose go beyond that specific question. The existence of alternative, equivalent quantum computational models then suggests that the ultimate question should be moved from the speed-up to a sort of “representation theorem” for quantum computation, meant as the general goal of identifying the physical features underlying these alternative frameworks that allow them to be labelled as “quantum computation”.