918 results for angular speed
Abstract:
Combustion control is one of the key factors in obtaining better performance and lower pollutant emissions from diesel, spark ignition and HCCI engines. An algorithm that estimates, for example, the mean indicated torque for each cylinder could easily be used in control strategies to balance the cylinders, control cycle-to-cycle variation, or detect misfires. A tool that evaluates the crank angle of 50% Mass Fraction Burned (MFB50), the net Cumulative Heat Release (CHRNET), or the peak value of the Rate of Heat Release (ROHR) could be used to optimize spark advance and detect knock in gasoline engines, and to optimize the injection pattern in diesel engines. Modern management systems are based on the control of the mean indicated torque produced by the engine: they need a real or virtual sensor in order to compare the measured value with the target one. Many studies have been performed to obtain a torque estimation that is accurate and reliable over time. The aim of this PhD activity was to develop two different algorithms: the first is based on the measurement of instantaneous engine speed fluctuations. The speed signal is picked up directly from the sensor facing the toothed wheel already mounted on the engine for other control purposes. The amplitudes of the engine speed fluctuations depend on the combustion and on the amount of torque delivered by each cylinder. The second algorithm processes in-cylinder pressure signals in the angular domain. In this case a crankshaft encoder is not necessary, because the angular reference can be obtained using a standard sensor wheel. The results obtained with the two methodologies are compared in order to evaluate which is suitable for on-board applications, depending on the accuracy required.
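For reference, the pressure-based quantities mentioned above (CHRNET, ROHR, MFB50) are commonly obtained from a single-zone net heat release analysis; the standard formulation, given here as general background rather than as the thesis's specific implementation, is

\[
\frac{dQ_{net}}{d\theta} \;=\; \frac{\gamma}{\gamma-1}\,p\,\frac{dV}{d\theta} \;+\; \frac{1}{\gamma-1}\,V\,\frac{dp}{d\theta},
\qquad
CHR_{net}(\theta) \;=\; \int_{\theta_{SOC}}^{\theta}\frac{dQ_{net}}{d\theta'}\,d\theta',
\]

where p and V are the in-cylinder pressure and volume as functions of crank angle θ and γ is the ratio of specific heats; MFB50 is then the crank angle at which CHRNET reaches 50% of its final value.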
Abstract:
Modern internal combustion engines are becoming increasingly complex in terms of their control systems and strategies. As the complexity of the algorithms grows, so does the number of quantities that must be evaluated on board for control purposes. In order to improve combustion efficiency and, at the same time, limit pollutant emissions, the on-board evaluation of two quantities in particular has become essential: the indicated torque produced by the engine and the angular position at which 50% of the fuel mass injected over an engine cycle is burned (MFB50). Both quantities can be evaluated from measurements of in-cylinder pressure. Nonetheless, at present, the installation of in-cylinder pressure sensors on vehicles is extremely uncommon, mainly because of measurement reliability and cost. This work illustrates a methodological approach to the estimation of indicated torque and MFB50 based on the measurement of engine speed fluctuations. The methodology is compatible with typical on-board application constraints. Moreover, it requires no additional cost, since speed can be measured using the system already mounted on the vehicle, which consists of a magnetic pickup facing a toothed wheel. The estimation algorithm consists of two main parts: first, the evaluation of the indicated torque fluctuation from the speed measurement; second, the evaluation of the mean indicated torque over an engine cycle and of MFB50, using their relationship with the indicated torque harmonics and other engine quantities. The procedure has been successfully applied to an L4 turbocharged diesel engine mounted on board a vehicle.
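A minimal sketch of the first step of such an algorithm is given below: it extracts one engine-order harmonic of the measured speed fluctuation and maps its amplitude to a mean indicated torque through a purely hypothetical regression (the function names, the calibration coefficients and the linear form are illustrative assumptions, not the thesis's actual model).

```python
import numpy as np

def speed_harmonic(theta, omega, order):
    """Complex Fourier coefficient of the speed fluctuation at a given engine order.

    theta : crank angle samples over one engine cycle [rad]
    omega : instantaneous angular speed samples [rad/s]
    order : engine order of interest (e.g. the firing order of the engine)
    """
    fluct = omega - np.mean(omega)                 # keep only the fluctuation
    span = theta[-1] - theta[0]                    # angular length of the cycle
    kernel = np.exp(-1j * 2 * np.pi * order * (theta - theta[0]) / span)
    coeff = 2.0 / span * np.trapz(fluct * kernel, theta)
    return np.abs(coeff), np.angle(coeff)

def mean_indicated_torque(amplitude, rpm, calib=(0.0, 1.0, 0.0)):
    """Hypothetical calibration: regress the cycle-mean indicated torque on the
    harmonic amplitude and the engine speed (coefficients would come from
    test-bench data)."""
    c0, c1, c2 = calib
    return c0 + c1 * amplitude + c2 * rpm
```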
Abstract:
The subject of this thesis is the accurate measurement of time dilation, aiming at a quantitative test of special relativity. By means of laser spectroscopy, the relativistic Doppler shifts of a clock transition in the metastable triplet spectrum of ⁷Li⁺ are measured simultaneously with and against the direction of motion of the ions. By employing saturation or optical double resonance spectroscopy, the Doppler broadening caused by the ions' velocity distribution is eliminated. From these shifts both the time dilation and the ion velocity can be extracted with high accuracy, allowing a test of the predictions of special relativity. A diode laser and a frequency-doubled titanium-sapphire laser were set up for antiparallel and parallel excitation of the ions, respectively. To achieve robust control of the laser frequencies required for the beam times, a redundant system of frequency standards was developed, consisting of a rubidium spectrometer, an iodine spectrometer, and a frequency comb. At the experimental section of the ESR, an automated laser beam guiding system for exact control of polarisation, beam profile, and overlap with the ion beam, as well as a fluorescence detection system, were built. During the first experiments, the production, acceleration and lifetime of the metastable ions at the GSI heavy ion facility were investigated for the first time. The characterisation of the ion beam made it possible, for the first time, to measure its velocity directly via the Doppler effect, resulting in a new, improved calibration of the electron cooler. In the following step the first sub-Doppler spectroscopy signals from an ion beam at 33.8% of the speed of light were recorded. The unprecedented accuracy of these experiments allowed a new upper bound to be derived for possible higher-order deviations from special relativity. Moreover, future measurements with the experimental setup developed in this thesis have the potential to improve the sensitivity to low-order deviations by at least one order of magnitude compared to previous experiments, and will thus contribute further to the test of the Standard Model.
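The measurement principle can be summarised by the standard relation used in such Doppler tests (stated here in its textbook form, not quoted from the thesis): for a transition of rest-frame frequency ν₀ probed parallel and antiparallel to an ion beam of velocity β,

\[
\nu_p = \nu_0\,\gamma\,(1+\beta), \qquad \nu_a = \nu_0\,\gamma\,(1-\beta)
\quad\Longrightarrow\quad
\nu_p\,\nu_a = \nu_0^{2}\,\gamma^{2}\,(1-\beta^{2}) = \nu_0^{2},
\]

so any measured deviation of ν_p ν_a / ν_0² from unity bounds deviations from the special-relativistic time dilation factor γ, independently of the exact beam velocity.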
Abstract:
Theoretical models are developed for the continuous-wave and pulsed laser incision and cutting of thin single- and multi-layer films. A one-dimensional steady-state model establishes the theoretical foundations of the problem by combining a power-balance integral with heat flow in the direction of laser motion. In this approach, classical modelling methods for laser processing are extended by introducing multi-layer optical absorption and thermal properties. The calculation domain is subdivided to follow the progressive removal of individual layers. A second, time-domain numerical model for the short-pulse laser ablation of metals accounts for changes in optical and thermal properties during a single laser pulse. With sufficient fluence, the target surface is heated towards its critical temperature and homogeneous boiling or "phase explosion" takes place. Improvements over previous works are obtained through more accurate calculation of the optical absorption and of the shielding of the incident beam by the ablation products. A third, general time-domain numerical laser processing model combines ablation depth and energy absorption data from the short-pulse model with two-dimensional heat flow in an arbitrary multi-layer structure. Layer removal is the result of both progressive short-pulse ablation and classical vaporisation due to long-term heating of the sample. At low velocity, pulsed laser exposure of multi-layer films comprising aluminium-plastic and aluminium-paper is found to be characterised by short-pulse ablation of the metallic layer and vaporisation or degradation of the others due to thermal conduction from the former. At high velocity, all layers of the two films are ultimately removed by vaporisation or degradation as the average beam power is increased to achieve a complete cut. The transition velocity between the two characteristic removal types is shown to be a function of the pulse repetition rate. An experimental investigation validates the simulation results and provides new laser processing data for some typical packaging materials.
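As orientation for the first model, a generic steady-state power balance for laser cutting (a textbook-style relation of the kind extended in the thesis to multi-layer absorption, not its exact formulation) reads

\[
A\,P \;=\; \rho\, v\, w\, d\,\bigl[\,c_p\,(T_r - T_0) + L\,\bigr] \;+\; P_{cond},
\]

where A is the absorptivity, P the incident beam power, ρ the material density, v the cutting speed, w the kerf width, d the layer thickness, c_p the specific heat, T_r the removal temperature, T_0 the initial temperature, L the relevant latent heat and P_cond the conduction losses.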
Abstract:
The development of a multibody model of a motorbike engine cranktrain is presented in this work, with an emphasis on the reduction of flexible component models. A modelling methodology based on the adoption of non-ideal joints at the interface locations and on the inclusion of component flexibility is developed: both are necessary if one wants to capture the dynamic effects that arise in lightweight, high-speed applications. With regard to the first topic, both a ball bearing model and a journal bearing model are implemented in order to properly capture the dynamic effects of the main connections in the system: the angular-contact ball bearings are modelled with a five-DOF nonlinear scheme to capture the behaviour of the crankshaft main bearings, while an impedance-based hydrodynamic bearing model provides improved prediction of the operation at the conrod big-end locations. Concerning the second topic, flexible models of the crankshaft and the connecting rod are produced. The well-established Craig-Bampton reduction technique is adopted as a general framework to obtain reduced model representations suitable for the subsequent multibody analyses. A particular component mode selection procedure, based on the concept of Effective Interface Mass, is implemented, allowing the accuracy of the reduced models to be assessed prior to the nonlinear simulation phase. In addition, a procedure to alleviate the effects of modal truncation, based on the Modal Truncation Augmentation approach, is developed. In order to assess the performance of the proposed modal reduction schemes, numerical tests are performed on the crankshaft and conrod models in both the frequency and modal domains. A multibody model of the cranktrain is finally assembled and simulated using commercial software. Numerical results are presented, demonstrating the effectiveness of the implemented flexible model reduction techniques, and the advantages over the conventional frequency-based truncation approach are discussed.
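For reference, the Craig-Bampton reduction mentioned above expresses the physical DOFs of a component in terms of its boundary DOFs and a truncated set of fixed-interface modal coordinates:

\[
\begin{Bmatrix} u_b \\ u_i \end{Bmatrix}
=
\begin{bmatrix} I & 0 \\ \Phi_c & \Phi_n \end{bmatrix}
\begin{Bmatrix} u_b \\ q \end{Bmatrix},
\]

where u_b are the boundary (interface) DOFs, u_i the internal DOFs, Φ_c the static constraint modes, Φ_n the retained fixed-interface normal modes and q the modal coordinates; the Effective Interface Mass criterion and the Modal Truncation Augmentation vectors act on the selection and enrichment of the columns of Φ_n.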
Abstract:
Model helicopter flying is a passion shared by an ever-growing number of people: new events are organised all over the world and new disciplines are continually being proposed. This is the case of the speed discipline, in which pilots compete to fly their model helicopters at the highest possible speeds. As a manufacturer of model helicopter blades and of the Goblin helicopter series, SAB Heli Division s.r.l. has an interest in supporting its pilots with its own machines, making sure they are fast and competitive. For this reason the company wanted to develop a blade that, mounted on its helicopter dedicated to this discipline, could beat the competition, with the ambition of setting an international speed record. The problem is therefore to develop a blade that best exploits the characteristics of the Goblin Speed model helicopter, so as to make the best use of the power installed on board. Owing to the limits of the available resources, the optimisation was carried out using blade element theory. The calculation was set up by determining the mean power over one rotor revolution in forward flight at 270 km/h; then, using the global optimisation algorithms available in MATLAB, the rotor that allows flight at that speed was sought by varying the rotor disc radius, the blade twist and the chord distribution along the blade. To obtain more accurate results, models were used to estimate the induced velocity field and the effects of dynamic stall. Other quantities for which no real data were available, or which were too complex to determine precisely with the available knowledge, were also estimated, aiming nevertheless at realistic values. These quantities include the aerodynamic characteristics of the NACA 0012 airfoil used, obtained through two-dimensional CFD analysis, the collective and cyclic pitch commands that trim the aircraft, and the aerodynamic drag of the whole model helicopter. The results of the calculation were first compared with the solutions already adopted by the company. The blade was then manufactured, and flight tests were carried out to evaluate the performance of the machine fitted with it. Despite the approximations adopted, the blade designed from the optimisation results was found to reflect the intended philosophy: at speeds comparable to those obtained with the blades produced by SAB Heli Division, the required power is indeed lower. However, a genuine improvement in flight speed could not be achieved, presumably because of the estimates of the aerodynamic characteristics of the various parts of the Goblin Speed.
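The following sketch illustrates the kind of blade-element mean-power evaluation on which such an optimisation can be built; it is deliberately simplified (uniform inflow, linear lift polar, no dynamic stall), and all numerical values and the linear twist/taper parametrisation are illustrative assumptions rather than the data used for the Goblin Speed.

```python
import numpy as np

RHO = 1.225  # air density [kg/m^3]

def mean_rotor_power(R, omega, V, n_blades, chord, twist, theta0,
                     cl_alpha=5.7, cd0=0.011, lam=0.05):
    """Mean aerodynamic power over one revolution in forward flight (blade element
    theory with uniform inflow; chord is a function of the spanwise station r/R)."""
    psi = np.linspace(0.0, 2 * np.pi, 72, endpoint=False)   # azimuth stations
    r = np.linspace(0.15 * R, R, 30)                         # radial stations
    dr = r[1] - r[0]
    power = 0.0
    for p in psi:
        u_t = omega * r + V * np.sin(p)          # in-plane (tangential) velocity
        u_p = lam * omega * R                    # perpendicular (inflow) velocity
        theta = theta0 + twist * (r / R)         # local pitch with linear twist
        phi = np.arctan2(u_p, u_t)
        alpha = theta - phi
        U2 = u_t**2 + u_p**2
        c = chord(r / R)
        dL = 0.5 * RHO * U2 * c * cl_alpha * alpha * dr   # elemental lift
        dD = 0.5 * RHO * U2 * c * cd0 * dr                # elemental profile drag
        dQ = (dD * np.cos(phi) + dL * np.sin(phi)) * r    # elemental shaft torque
        power += n_blades * omega * np.sum(dQ)
    return power / len(psi)

# Illustrative call at 270 km/h (75 m/s) forward speed with assumed geometry.
print(mean_rotor_power(R=0.35, omega=580.0, V=75.0, n_blades=2,
                       chord=lambda x: 0.05 * (1.0 - 0.3 * x),
                       twist=-0.10, theta0=0.12))
```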
Abstract:
The beta decay of free neutrons is a strongly over-determined process in the Standard Model (SM) of particle physics and is described by a multitude of observables. Some of these observables, for example the correlation coefficients of the involved particles, are sensitive to physics beyond the SM. The spectrometer aSPECT was designed to measure precisely the shape of the proton energy spectrum and to extract from it the electron antineutrino angular correlation coefficient "a". A first test period (2005/2006) provided the proof of principle. The limiting influence of uncontrollable background conditions in the spectrometer made it impossible to extract a reliable value for the coefficient "a" (Baessler et al., 2008, Eur. Phys. J. A, 38, 17-26). A second measurement cycle (2007/2008) aimed to improve on the relative accuracy da/a = 5% of previous experiments (Stratowa et al., 1978; Byrne et al., 2002). I performed the analysis of the data taken there, which is the focus of this doctoral thesis. A central point is the study of background. The systematic impact of background on "a" was reduced to da/a(syst.) = 0.61%. The statistical accuracy of the analysed measurements is da/a(stat.) = 1.4%. In addition, saturation effects of the detector electronics, observed at an early stage, were investigated; these turned out not to be correctable at a sufficient level. A practicable idea for avoiding the saturation effects is discussed in the last chapter.
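For context, the coefficient "a" enters the differential neutron decay rate, in the standard parameterisation of Jackson, Treiman and Wyld (given here as textbook background, not as the thesis's own notation), as

\[
d\Gamma \;\propto\; 1 \;+\; a\,\frac{\vec p_e \cdot \vec p_{\bar\nu}}{E_e\,E_{\bar\nu}} \;+\; \ldots,
\]

so that the recoil-proton energy spectrum measured by aSPECT depends on the electron-antineutrino angular correlation and hence on "a".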
Abstract:
The aim of this study was to examine whether a real high-speed, short-term competition influences clinicopathological data, focusing on muscle enzymes, iron profile and acute phase proteins. Thirty Thoroughbred racehorses (15 geldings and 15 females) aged between 4 and 12 years (mean 7 years) were used for the study. All the animals performed a high-speed, short-term competition over a total distance of 154 m in about 12 seconds, repeated 8 times within approximately one hour (Niballo Horse Race). Blood samples were obtained 24 hours before and within 30 minutes after the end of the races. A complete blood count (CBC) and biochemical and haemostatic profiles were performed on all samples. The post-race concentration of each parameter was corrected using an estimate of the plasma volume contraction based on the individual albumin (Alb) concentration. Data were analysed with descriptive statistics and the percentage variation from baseline values was recorded. Pre- and post-race results were compared with non-parametric statistics (Mann-Whitney U test). A difference was considered significant at p<0.05. A significant plasma volume contraction after the race was detected (Hct, Alb; p<0.01). Other relevant findings were increased concentrations of muscle enzymes (CK, LDH; p<0.01) and Crt (p<0.01), a significant increase of uric acid (p<0.01), a significant decrease of haptoglobin (p<0.01) associated with an increase of ferritin concentrations (p<0.01), and a significant decrease of fibrinogen (p<0.05) accompanied by a non-significant increase of D-dimer concentrations (p=0.08). This competition produced relevant clinicopathological abnormalities in galloping horses. The study confirms significant muscle damage, oxidative stress, intravascular haemolysis and subclinical haemostatic alterations. Further studies are needed to better understand the pathogenesis, the medical relevance and the impact on performance of these alterations in equine sports medicine.
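A simple way to apply such an albumin-based correction for haemoconcentration (shown only to illustrate the idea; the study's exact formula is not reported in the abstract) is to rescale each post-race concentration by the pre- to post-race albumin ratio:

\[
C_{post}^{corr} \;=\; C_{post}\,\frac{Alb_{pre}}{Alb_{post}}.
\]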
Abstract:
Due to its practical importance and inherent complexity, the optimisation of distribution networks for supplying drinking water has been the subject of extensive study for the past 30 years. The optimisation typically involves sizing the pipes of the water distribution network (WDN), optimising specific parts of the network such as pumps and tanks, or analysing and optimising the reliability of the WDN. In this thesis, two different WDNs (the Anytown and Cabrera city networks) are analysed by formulating and solving a multi-objective optimisation problem (MOOP). The two main objectives in both cases were the minimisation of energy cost (€) or energy consumption (kWh), together with the total number of pump switches (TNps) during a day. For this purpose, GANetXL, a decision-support-system generator for multi-objective optimisation developed by the Centre for Water Systems at the University of Exeter, was used. GANetXL works by calling the EPANET hydraulic solver each time a hydraulic analysis is required. The main algorithm used was NSGA-II, a second-generation multi-objective optimisation algorithm, which provided the Pareto front of each configuration. The first experiment concerned the Anytown network, a large network in which a pumping station of four fixed-speed parallel pumps drives the water. The main intervention was to replace these pumps with variable-speed-driven pumps (VSDPs) by installing inverters capable of varying their speed during the day. Considerable energy and cost savings were achieved, along with a reduction in the number of pump switches. The results are thoroughly illustrated in Chapter 7, with comments and a variety of graphs for the different configurations. The second experiment concerned the Cabrera network, a smaller WDN with a single fixed-speed (FS) pump. The optimisation problem was the same: the minimisation of energy consumption together with the minimisation of TNps. The same optimisation tool (GANetXL) was used. The main aim was to carry out several experiments covering a wide variety of configurations, using different pumps (this time keeping the FS mode), different tank levels, different pipe diameters and different emitter coefficients. These configurations produced a large number of results, which are compared in Chapter 8. In conclusion, the optimisation of WDNs is a very interesting field with a vast space of options: a large number of algorithms to choose from, different techniques and configurations, and different decision-support-system generators. The researcher has to be ready to "roam" among these choices until a satisfactory result shows that a good optimisation point has been reached.
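The bi-objective formulation can be pictured with the minimal sketch below: a 24-hour on/off pump schedule is scored on energy cost and on the number of pump switches, and the non-dominated schedules are kept, which is the ranking NSGA-II automates. The pump power and tariff values are placeholder assumptions; in the thesis the hydraulic and energy figures come from EPANET through GANetXL.

```python
from typing import List, Tuple

PUMP_POWER_KW = 75.0                               # assumed pump power when on
TARIFF = [0.08] * 8 + [0.15] * 12 + [0.08] * 4     # assumed tariff in EUR/kWh per hour

def evaluate(schedule: List[int]) -> Tuple[float, int]:
    """Return (energy cost in EUR, total number of pump switches) for a 24-h schedule."""
    cost = sum(on * PUMP_POWER_KW * TARIFF[h] for h, on in enumerate(schedule))
    switches = sum(1 for a, b in zip(schedule, schedule[1:]) if a != b)
    return cost, switches

def pareto_front(solutions: List[List[int]]) -> List[Tuple[float, int]]:
    """Keep only the non-dominated (cost, switches) points."""
    scored = [evaluate(s) for s in solutions]
    front = []
    for p in scored:
        if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in scored):
            front.append(p)
    return front

if __name__ == "__main__":
    candidates = [[1] * 24, [1] * 12 + [0] * 12, [1, 0] * 12]
    print(pareto_front(candidates))
```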
Abstract:
The aim of this thesis is to investigate the nature of quantum computation and the question of the quantum speed-up over classical computation by comparing two different quantum computational frameworks: the traditional quantum circuit model and the cluster-state quantum computer. After an introductory survey of the theoretical and epistemological questions concerning quantum computation, the first part of the thesis provides a presentation of cluster-state computation suitable for a philosophical audience. In spite of the computational equivalence between the two frameworks, their differences can be considered structural. Entanglement is shown to play a fundamental role in both quantum circuits and cluster-state computers; this supports, from a new perspective, the argument that entanglement can reasonably explain the quantum speed-up over classical computation. However, quantum circuits and cluster-state computers diverge with regard to one of the explanations of quantum computation that actually accords a central role to entanglement, namely the Everett interpretation. It is argued that, while cluster-state quantum computation does not expose an Everettian failure in accounting for the computational processes, it threatens to render that interpretation non-explanatory. The analysis presented here should be integrated into a more general work that also includes further frameworks of quantum computation, e.g. topological quantum computation. What this work reveals, however, is that the speed-up question does not capture all that is at stake: both quantum circuits and cluster-state computers achieve the speed-up, but the challenges they pose go beyond that specific question. The existence of alternative, equivalent quantum computational models then suggests that the ultimate question should be moved from the speed-up to a sort of "representation theorem" for quantum computation, meant as the general goal of identifying the physical features underlying these alternative frameworks that allow them to be labelled as "quantum computation".
Abstract:
In this thesis, we study the accretion of mass and angular momentum onto the discs of spiral galaxies, from both a global and a local perspective, comparing theoretical predictions with several observational datasets. First, we propose a method to measure the specific mass and radial growth rates of stellar discs, based on their star formation rate density profiles, and we apply it to a sample of nearby spiral galaxies. We find a positive radial growth rate for almost all galaxies in our sample. Our galaxies grow in size, on average, at one third of the rate at which they grow in mass. These results agree with theoretical expectations if the known scaling relations of disc galaxies do not evolve with time. We also propose a novel method to reconstruct accretion profiles and the local angular momentum of the accreting material from the observed structural and chemical properties of spiral galaxies. Applied to the Milky Way and to one external galaxy, our analysis indicates that accretion occurs at relatively large radii and has a local deficit of angular momentum with respect to the disc. Finally, we show how the structure and kinematics of hot gaseous coronae, which are believed to be the source of mass and angular momentum of massive spiral galaxies, can be reconstructed from their angular momentum and entropy distributions. We find that isothermal models with cosmologically motivated angular momentum distributions are compatible with several independent observational constraints. We also consider more complex baroclinic equilibria: we describe a new parametrization of these states, a new self-similar family of solutions, and a method for reconstructing structure and kinematics from the joint angular momentum/entropy distribution.
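In compact form, the growth rates referred to above can be defined (standard definitions consistent with the abstract, neglecting stellar mass return) as

\[
\nu_M \;\equiv\; \frac{\dot M_\star}{M_\star} \;\simeq\; \frac{\mathrm{SFR}}{M_\star},
\qquad
\nu_R \;\equiv\; \frac{\dot R_\star}{R_\star},
\]

and the result quoted above corresponds to ν_R ≈ ν_M / 3 on average for the sample.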
Abstract:
The concept of inflation was introduced in the early 1980s to solve some problems of the standard cosmological model, such as the horizon and flatness problems. The predictions of the simplest inflationary models are in good agreement with the most recent cosmological observations, which confirm flat spatial sections and a spectrum of primordial fluctuations with nearly Gaussian statistics. The most recent Planck data, while in excellent agreement with a simple power law for the spectrum at scales k > 0.08 Mpc^-1, seem to indicate possible deviations at larger scales, although not at a statistically significant level because of cosmic variance. These deviations in the spectrum can be explained by inflationary models that include a violation of the slow-roll condition and that make precise predictions for the spectrum. For one of the first such models, proposed by Starobinsky and characterised by a discontinuity in the first derivative of the potential, the spectrum and the bispectrum of the primordial fluctuations are known analytically. In this thesis we extend that model to non-standard kinetic terms, computing its bispectrum analytically and comparing the results with those available in the literature. In particular, the introduction of a non-standard kinetic term yields a non-trivial sound speed for the inflaton, which allows the known bispectrum results for this model to be extended. We first study the corrections to the bispectrum known in the literature that arise because in this case the sound speed is a time-dependent function; we then attempt to compute analytically a further contribution to the bispectrum proportional to the first derivative of the sound speed (which vanishes for the original model).
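For orientation, in single-field models with a non-standard kinetic term the curvature power spectrum takes the standard slow-roll form (quoted here as general background, not as a result of the thesis)

\[
\mathcal{P}_\zeta \;=\; \frac{H^{2}}{8\pi^{2}\,\epsilon\,c_s\,M_{\rm Pl}^{2}}\bigg|_{k c_s = aH},
\]

evaluated at sound-horizon crossing, so a time-dependent sound speed c_s modifies both the spectrum and, through its derivatives, the bispectrum.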
Abstract:
In this work we study a model for breast image reconstruction in Digital Tomosynthesis, a non-invasive and non-destructive method for the three-dimensional visualization of the inner structures of an object, in which data acquisition consists of measuring a limited number of low-dose two-dimensional projections of the object by moving a detector and an X-ray tube around it within a limited angular range. Reconstructing 3D images from the projections provided by Digital Tomosynthesis is an ill-posed inverse problem, which leads to a minimization problem whose objective function contains a data-fitting term and a regularization term. The contribution of this thesis is to use compressed sensing techniques, in particular replacing the standard least-squares data-fitting problem with the minimization of the 1-norm of the residuals, and using Total Variation (TV) as the regularization term. We tested two different algorithms: a new alternating minimization algorithm (ADM) and a version of the more standard scaled projected gradient algorithm (SGP) adapted to the 1-norm. We performed several experiments and analysed the performance of the two methods by comparing relative errors, numbers of iterations, computation times and the quality of the reconstructed images. In conclusion, the 1-norm and Total Variation proved to be valid tools in formulating the minimization problem for image reconstruction in Digital Tomosynthesis; the new ADM algorithm reached a relative error comparable to that of the classic SGP variant while proving faster and showing earlier appearance of the structures representing the masses.
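Written out, the reconstruction problem described above takes the form

\[
\min_{x}\;\; \|Ax - b\|_{1} \;+\; \lambda\,\mathrm{TV}(x),
\]

where x is the volume to be reconstructed, A the projection operator modelling the limited-angle acquisition, b the measured low-dose projections and λ > 0 the regularization parameter balancing the data fit and the Total Variation penalty; both ADM and the 1-norm variant of SGP are applied to this objective.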