Abstract:
Charmless charged two-body B decays are sensitive probes of the CKM matrix, which parameterizes CP violation in the Standard Model (SM), and have the potential to reveal the presence of New Physics. The framework of CP violation within the SM, the role of the CKM matrix with its basic formalism, and the current experimental status are presented. The theoretical tools commonly used to treat hadronic B decays and an overview of the phenomenology of charmless two-body B decays are outlined. LHCb is one of the four main experiments operating at the Large Hadron Collider (LHC), devoted to the measurement of CP violation and rare decays of charm and beauty hadrons. The LHCb detector is described, focusing on the technologies adopted for each sub-detector and summarizing their performance. The state of the art of LHCb measurements with charmless two-body B decays is then presented. Using the 37/pb of integrated luminosity collected at sqrt(s) = 7 TeV by LHCb during 2010, the direct CP asymmetries ACP(B0 -> Kpi) = −0.074 +/- 0.033 +/- 0.008 and ACP(Bs -> piK) = 0.15 +/- 0.19 +/- 0.02 are measured. Using 320/pb of integrated luminosity collected during 2011, these measurements are updated to ACP(B0 -> Kpi) = −0.088 +/- 0.011 +/- 0.008 and ACP(Bs -> piK) = 0.27 +/- 0.08 +/- 0.02. In addition, the branching ratios BR(B0 -> K+K-) = (0.13+0.06-0.05 +/- 0.07) x 10^-6 and BR(Bs -> pi+pi-) = (0.98+0.23-0.19 +/- 0.11) x 10^-6 are measured. Finally, using a sample of 370/pb of integrated luminosity collected during 2011, the relative branching ratios BR(B0 -> pi+pi-)/BR(B0 -> Kpi) = 0.262 +/- 0.009 +/- 0.017, (fs/fd)BR(Bs -> K+K-)/BR(B0 -> Kpi) = 0.316 +/- 0.009 +/- 0.019, (fs/fd)BR(Bs -> piK)/BR(B0 -> Kpi) = 0.074 +/- 0.006 +/- 0.006 and BR(Lambda_b -> ppi)/BR(Lambda_b -> pK) = 0.86 +/- 0.08 +/- 0.05 are determined.
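For reference, the direct CP asymmetries quoted above follow the standard rate definition; for B0 -> Kpi, for instance,

\[ A_{CP}(B^0 \to K\pi) = \frac{\Gamma(\overline{B}^0 \to K^-\pi^+) - \Gamma(B^0 \to K^+\pi^-)}{\Gamma(\overline{B}^0 \to K^-\pi^+) + \Gamma(B^0 \to K^+\pi^-)} , \]

with the first quoted uncertainty statistical and the second systematic.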
Abstract:
The subject of this thesis is the accurate measurement of time dilation, aiming at a quantitative test of special relativity. By means of laser spectroscopy, the relativistic Doppler shifts of a clock transition in the metastable triplet spectrum of ^7Li^+ are measured simultaneously with and against the direction of motion of the ions. By employing saturation or optical double-resonance spectroscopy, the Doppler broadening caused by the ions' velocity distribution is eliminated. From these shifts both the time dilation and the ion velocity can be extracted with high accuracy, allowing a test of the predictions of special relativity. A diode laser and a frequency-doubled titanium-sapphire laser were set up for antiparallel and parallel excitation of the ions, respectively. To achieve robust control of the laser frequencies required for the beam times, a redundant system of frequency standards consisting of a rubidium spectrometer, an iodine spectrometer, and a frequency comb was developed. At the experimental section of the ESR, an automated laser beam guiding system for exact control of polarisation, beam profile, and overlap with the ion beam, as well as a fluorescence detection system, were set up. During the first experiments, the production, acceleration and lifetime of the metastable ions at the GSI heavy ion facility were investigated for the first time. The characterisation of the ion beam made it possible, for the first time, to measure its velocity directly via the Doppler effect, which resulted in a new, improved calibration of the electron cooler. In the following step, the first sub-Doppler spectroscopy signals from an ion beam at 33.8 % of the speed of light were recorded. The unprecedented accuracy of these experiments allowed a new upper bound to be derived for possible higher-order deviations from special relativity. Moreover, future measurements with the experimental setup developed in this thesis have the potential to improve the sensitivity to low-order deviations by at least one order of magnitude compared to previous experiments, and will thus contribute further to the test of the Standard Model.
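Measuring with and against the direction of motion makes this an Ives-Stilwell-type test: for ions of velocity \(\beta = v/c\), the laboratory frequencies \(\nu_p\) and \(\nu_a\) of the parallel and antiparallel lasers that are simultaneously resonant with the rest-frame transition frequency \(\nu_0\) satisfy

\[ \gamma(1-\beta)\,\nu_p = \nu_0, \qquad \gamma(1+\beta)\,\nu_a = \nu_0 \quad\Longrightarrow\quad \nu_p\,\nu_a = \nu_0^2 , \]

so the product of the two measured frequencies tests time dilation independently of the exact ion velocity, while their ratio yields \(\beta\) itself; any measured deviation of \(\nu_p\nu_a/\nu_0^2\) from unity bounds departures from special relativity.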
Abstract:
Theoretical models are developed for the continuous-wave and pulsed laser incision and cutting of thin single- and multi-layer films. A one-dimensional steady-state model establishes the theoretical foundations of the problem by combining a power-balance integral with heat flow in the direction of laser motion. In this approach, classical modelling methods for laser processing are extended by introducing multi-layer optical absorption and thermal properties. The calculation domain is divided accordingly as individual layers are progressively removed. A second, time-domain numerical model for the short-pulse laser ablation of metals accounts for changes in optical and thermal properties during a single laser pulse. With sufficient fluence, the target surface is heated towards its critical temperature and homogeneous boiling or "phase explosion" takes place. Improvements over previous work are obtained through more accurate calculation of optical absorption and of the shielding of the incident beam by the ablation products. A third, general time-domain numerical laser processing model combines ablation depth and energy absorption data from the short-pulse model with two-dimensional heat flow in an arbitrary multi-layer structure. Layer removal results both from progressive short-pulse ablation and from classical vaporisation due to long-term heating of the sample. At low velocity, pulsed laser exposure of multi-layer films comprising aluminium-plastic and aluminium-paper is found to be characterised by short-pulse ablation of the metallic layer and vaporisation or degradation of the others due to thermal conduction from the former. At high velocity, all layers of the two films are ultimately removed by vaporisation or degradation as the average beam power is increased to achieve a complete cut. The transition velocity between the two characteristic removal types is shown to be a function of the pulse repetition rate. An experimental investigation validates the simulation results and provides new laser processing data for some typical packaging materials.
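A minimal sketch of the classical power-balance estimate that such a one-dimensional model extends: the absorbed beam power is equated to the enthalpy flux needed to heat and vaporise the material swept through the kerf. The material values below are illustrative assumptions, not data from the thesis.

def cut_depth(power_w, absorptivity, speed_m_s, kerf_m,
              rho, c_p, t_vap, t_0, latent_vap):
    """Depth (m) at which absorbed power balances the enthalpy flux
    needed to heat and vaporise the material swept through the kerf."""
    enthalpy = c_p * (t_vap - t_0) + latent_vap  # J/kg to remove material
    return absorptivity * power_w / (rho * speed_m_s * kerf_m * enthalpy)

# Illustrative values loosely typical of a thin aluminium layer.
print(cut_depth(power_w=200.0, absorptivity=0.1, speed_m_s=1.0,
                kerf_m=50e-6, rho=2700.0, c_p=900.0,
                t_vap=2740.0, t_0=293.0, latent_vap=1.05e7))

The multi-layer model generalises this balance by switching the optical and thermal properties as successive layers are removed.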
Abstract:
Model helicopter flying is a passion shared by a growing number of people: new events are organized all over the world and new disciplines are continually being proposed. This is the case of the speed discipline, in which pilots compete to fly their model helicopters at the highest possible speeds. SAB Heli Division s.r.l., as a manufacturer of model helicopter blades and of the Goblin helicopter series, has an interest in supporting its pilots with its own machines, making sure they are fast and competitive. It therefore decided to develop a blade that, mounted on its helicopter dedicated to this discipline, could beat the competition, with the ambition of setting an international speed record. The problem is thus to develop a blade that best exploits the characteristics of the Goblin Speed model helicopter, so as to make the best use of the power installed on board. Owing to the limited means available, the optimisation was carried out using blade element theory. The calculation was set up by determining the mean power over one rotor revolution in forward flight at 270 km/h; then, using the global optimisation algorithms available in the MATLAB computing environment, a search was performed for the rotor that permits flight at that speed, varying the rotor disc radius, the blade twist and the chord distribution along the blade. To obtain more accurate results, models were used to estimate the induced velocity field and the effects of dynamic stall. Other quantities were also estimated for which real data are not known, or for which a precise figure would be too complex to obtain with the available knowledge; realistic estimates were nevertheless sought. Among these quantities are the aerodynamic characteristics of the NACA 0012 airfoil used, obtained by two-dimensional CFD analysis, the collective and cyclic pitch commands that trim the aircraft, and the aerodynamic drag of the whole model helicopter. The results of the calculation were compared first of all with the solutions already adopted by the company. The blade was then manufactured, and flight tests were carried out to evaluate the performance of the machine equipped with it. Despite the approximations adopted, the blade designed from the optimisation results was observed to reflect the design philosophy: at speeds comparable to those obtained with the blades produced by SAB Heli Division, the required power is indeed lower. However, it was not possible to achieve an actual improvement in flight speed, presumably because of the estimates of the aerodynamic characteristics of the various parts of the Goblin Speed.
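A minimal Python sketch of the kind of blade-element power evaluation that sits inside such an optimisation loop. The thesis couples this with induced-velocity and dynamic-stall models and MATLAB's global optimisation algorithms; here a crude uniform inflow and a simple drag polar are assumed, and every number is illustrative.

import numpy as np

RHO = 1.225        # air density, kg/m^3
A_LIFT = 5.7       # assumed lift-curve slope for the NACA 0012, 1/rad
CD0 = 0.011        # assumed profile drag coefficient

def mean_rotor_power(radius, chord, twist, omega, v_inf, n_blades=2,
                     collective=np.radians(8.0)):
    """Azimuth-averaged rotor power (W) from blade element theory."""
    r = np.linspace(0.15 * radius, radius, 40)                # radial stations
    psi = np.linspace(0.0, 2.0 * np.pi, 72, endpoint=False)   # azimuth angles
    R, PSI = np.meshgrid(r, psi)
    u_t = omega * R + v_inf * np.sin(PSI)          # in-plane velocity component
    u_p = 0.03 * omega * radius * np.ones_like(R)  # crude uniform inflow
    phi = np.arctan2(u_p, u_t)                     # inflow angle
    alpha = collective + twist * (R / radius) - phi
    cl = A_LIFT * alpha
    cd = CD0 + cl**2 / (np.pi * 6.0)               # assumed induced-drag-like term
    v2 = u_t**2 + u_p**2
    d_drag = 0.5 * RHO * v2 * chord * (cd * np.cos(phi) + cl * np.sin(phi))
    d_torque = d_drag * R                          # torque per unit span
    dr = r[1] - r[0]
    torque = n_blades * np.sum(d_torque.mean(axis=0)) * dr
    return torque * omega

print(mean_rotor_power(radius=0.35, chord=0.03, twist=np.radians(-6.0),
                       omega=450.0, v_inf=75.0))   # ~270 km/h forward flight

An optimiser would call such a function repeatedly, varying rotor radius, twist and chord distribution to minimise the mean power required at the target speed.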
Abstract:
The aim of this study was to examine whether a real high-speed, short-duration competition influences clinicopathological data, focusing on muscle enzymes, the iron profile and acute phase proteins. Thirty Thoroughbred racing horses (15 geldings and 15 mares) aged 4-12 years (mean 7 years) were used for the study. All the animals performed a high-speed, short-duration competition over a distance of 154 m in about 12 seconds, repeated 8 times within approximately one hour (Niballo Horse Race). Blood samples were obtained 24 hours before and within 30 minutes after the end of the races. A complete blood count (CBC) and biochemical and haemostatic profiles were performed on all samples. The post-race concentration of each parameter was corrected using an estimate of the plasma volume contraction based on the individual albumin (Alb) concentration. Data were analysed with descriptive statistics, and the percentage variation from baseline values was recorded. Pre- and post-race results were compared with non-parametric statistics (Mann-Whitney U test), with differences considered significant at p<0.05. A significant plasma volume contraction after the race was detected (Hct, Alb; p<0.01). Other relevant findings were increased concentrations of muscle enzymes (CK, LDH; p<0.01) and creatinine (Crt; p<0.01), a significant increase in uric acid (p<0.01), a significant decrease in haptoglobin (p<0.01) associated with an increase in ferritin concentrations (p<0.01), and a significant decrease in fibrinogen (p<0.05) accompanied by a non-significant increase in D-dimer concentrations (p=0.08). This competition produced relevant clinicopathological abnormalities in galloping horses. The study confirms significant muscle damage, oxidative stress, intravascular haemolysis and subclinical haemostatic alterations. Further studies are needed to better understand the pathogenesis, the medical relevance and the impact on performance of these alterations in equine sports medicine.
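A minimal sketch of the two analysis steps, with invented numbers: the albumin-based plasma-volume correction (here an assumed multiplicative rescaling) followed by the non-parametric pre/post comparison. scipy is assumed available.

from scipy.stats import mannwhitneyu

def pv_corrected(post_value, alb_pre, alb_post):
    """Rescale a post-race concentration by the albumin-estimated
    plasma-volume contraction (assumed correction scheme)."""
    return post_value * (alb_pre / alb_post)

ck_pre = [210, 185, 240, 205, 190]                  # illustrative CK values, U/L
ck_post = [pv_corrected(v, 3.1, 3.5) for v in (480, 390, 520, 455, 410)]

stat, p = mannwhitneyu(ck_pre, ck_post, alternative="two-sided")
print(f"U={stat:.1f}, p={p:.3f}")                   # significant if p < 0.05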
Abstract:
The LHCb experiment at the LHC, by exploiting the high production cross section for $c\overline{c}$ quark pairs, offers the possibility to investigate $\mathcal{CP}$ violation in the charm sector with very high precision.\\ In this thesis a measurement of time-integrated \(\mathcal{CP}\) violation using $D^0\rightarrow~K^+K^-$ and $D^0\rightarrow \pi^+\pi^-$ decays at LHCb is presented. The measured quantity is the difference ($\Delta$) in \(\mathcal{CP}\) asymmetry ($\mathcal{A}_{\mathcal{CP}}$) between the decay rates of $D^0$ and $\overline{D}^0$ mesons into $K^+K^-$ and $\pi^+\pi^-$ pairs.\\ The analysis is performed on 2011 data, collected at \(\sqrt{s}=7\) TeV and corresponding to an integrated luminosity of 1 fb\(^{-1}\), and 2012 data, collected at \(\sqrt{s}=8\) TeV and corresponding to an integrated luminosity of 2 fb\(^{-1}\).\\ A complete study of systematic uncertainties is beyond the scope of this thesis; however, the most important systematic uncertainty of the previous analysis has been studied. We find that it was due to a statistical fluctuation, and we demonstrate that it no longer needs to be taken into account.\\ By combining the 2011 and 2012 results, the final statistical precision is 0.08\%. Once this analysis is completed and published, it will be the most precise single measurement in the search for $\mathcal{CP}$ violation in the charm sector.
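The measured quantity can be written, in the notation standard for these analyses, as

\[ \Delta\mathcal{A}_{\mathcal{CP}} = \mathcal{A}_{\mathcal{CP}}(K^+K^-) - \mathcal{A}_{\mathcal{CP}}(\pi^+\pi^-) . \]

Its experimental appeal is that the raw asymmetry of each mode decomposes, to first order, as \(\mathcal{A}_{raw}(f) \simeq \mathcal{A}_{\mathcal{CP}}(f) + \mathcal{A}_{D} + \mathcal{A}_{P}\), where the detection (\(\mathcal{A}_{D}\)) and production (\(\mathcal{A}_{P}\)) asymmetries are common to the two final states and cancel in the difference.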
Abstract:
Due to its practical importance and inherent complexity, the optimisation of distribution networks for supplying drinking water has been the subject of extensive study for the past 30 years. Such optimisation concerns the sizing of the pipes in the water distribution network (WDN) and/or specific parts of the network such as pumps and tanks, or attempts to analyse and optimise the reliability of a WDN. In this thesis, the author has analysed two different WDNs (the Anytown and Cabrera networks), solving a multi-objective optimisation problem (MOOP) in each case. The two main objectives in both cases were the minimisation of energy cost (EUR) or energy consumption (kWh), together with the total number of pump switches (TNps) during a day. For this purpose, a decision support system generator for multi-objective optimisation called GANetXL was used, developed by the Centre for Water Systems at the University of Exeter. GANetXL works by calling the EPANET hydraulic solver each time a hydraulic analysis is required. The main algorithm used was NSGA-II, a second-generation multi-objective optimisation algorithm, which produced the Pareto front of each configuration. The first experiment carried out concerned the Anytown network, a large network whose pump station of four fixed-speed parallel pumps drives the water through the system. The main intervention was to replace these pumps with variable-speed-driven pumps (VSDPs), by installing inverters capable of varying their speed during the day. Substantial energy and cost savings were thus achieved, along with a reduction in the number of pump switches. The results of this research are thoroughly illustrated in chapter 7, with comments and a variety of graphs for the different configurations. The second experiment concerned the Cabrera network, a smaller WDN with a single fixed-speed (FS) pump. The optimisation problem was the same: minimisation of energy consumption together with minimisation of TNps, using the same optimisation tool (GANetXL). The main scope was to carry out a range of experiments over a wide variety of configurations, using a different pump (this time keeping the FS mode), different tank levels, different pipe diameters and different emitter coefficients. These different modes produced a large number of results, which are compared in chapter 8. In conclusion, the optimisation of WDNs is a very interesting field with a vast space of options: a large number of algorithms to choose from, different techniques and configurations, and different decision support system generators. The researcher has to be ready to "roam" among these choices until a satisfactory result shows that a good optimisation point has been reached.
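As a concrete illustration of the two objectives, the following minimal Python sketch evaluates a candidate 24-hour pump schedule. The tariff, pump power and schedule are invented for illustration; in the thesis, the hydraulic feasibility of each candidate is checked by EPANET through GANetXL.

def energy_cost(schedule, power_kw, tariff_eur_kwh):
    """Cost (EUR) of running the pump for each hour the schedule is on."""
    return sum(on * power_kw * t for on, t in zip(schedule, tariff_eur_kwh))

def pump_switches(schedule):
    """Total number of on/off transitions over the day (TNps)."""
    return sum(a != b for a, b in zip(schedule, schedule[1:]))

schedule = [0]*6 + [1]*4 + [0]*4 + [1]*6 + [0]*4   # 24 hourly on/off states
tariff = [0.08]*7 + [0.15]*13 + [0.08]*4           # assumed EUR/kWh tariff
print(energy_cost(schedule, power_kw=45.0, tariff_eur_kwh=tariff),
      pump_switches(schedule))

In an NSGA-II run, these two functions are the quantities minimised simultaneously, yielding the Pareto front of energy cost against pump switches.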
Abstract:
The aim of this thesis is to investigate the nature of quantum computation and the question of the quantum speed-up over classical computation by comparing two different quantum computational frameworks: the traditional quantum circuit model and the cluster-state quantum computer. After an introductory survey of the theoretical and epistemological questions concerning quantum computation, the first part of this thesis provides a presentation of cluster-state computation suitable for a philosophical audience. In spite of the computational equivalence between the two frameworks, their differences can be considered structural. Entanglement is shown to play a fundamental role in both quantum circuits and cluster-state computers; this supports, from a new perspective, the argument that entanglement can reasonably explain the quantum speed-up over classical computation. However, quantum circuits and cluster-state computers diverge with regard to one of the explanations of quantum computation that actually accords a central role to entanglement, i.e. the Everett interpretation. It is argued that, while cluster-state quantum computation does not reveal an Everettian failure in accounting for the computational processes, it threatens that interpretation with being non-explanatory. The analysis presented here should be integrated into a more general work covering further frameworks of quantum computation, e.g. topological quantum computation. What this work reveals, however, is that the speed-up question does not capture all that is at stake: both quantum circuits and cluster-state computers achieve the speed-up, but the challenges they pose go beyond that specific question. The existence of alternative, equivalent quantum computational models then suggests that the ultimate question should be moved from the speed-up to a sort of "representation theorem" for quantum computation, understood as the general goal of identifying the physical features underlying these alternative frameworks that allow them to be labelled "quantum computation".
Abstract:
The concept of inflation was introduced in the early 1980s to solve several problems of the standard cosmological model, such as the horizon and flatness problems. The predictions of the simplest inflationary models are in good agreement with the most recent cosmological observations, which confirm spatially flat sections and a spectrum of primordial fluctuations with close-to-Gaussian statistics. The most recent Planck data, while in excellent agreement with a simple power law for the spectrum at scales k > 0.08 Mpc^-1, seem to indicate possible deviations at larger scales, although not at a statistically significant level because of cosmic variance. These deviations in the spectrum can be explained by inflationary models that include a violation of the slow-roll condition and that make precise predictions for the spectrum. For one of the first such models, proposed by Starobinsky and characterised by a discontinuity in the first derivative of the potential, the spectrum and the bispectrum of the primordial fluctuations are known analytically. In this thesis we extend that model to non-standard kinetic terms, computing the bispectrum analytically and comparing the results with those in the literature. In particular, the introduction of a non-standard kinetic term yields a non-trivial sound speed for the inflaton, which makes it possible to extend the known results for the bispectrum of this model. We first study the corrections to the bispectrum known in the literature that arise because, in this case, the sound speed is a time-dependent function; we then seek to compute analytically a further contribution to the bispectrum proportional to the first derivative of the sound speed (which vanishes in the original model).
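For context, in general single-field models with inflaton sound speed \(c_s\), the scalar power spectrum and the typical size of the equilateral-type non-Gaussianity take the standard forms

\[ \mathcal{P}_\zeta = \left. \frac{H^2}{8\pi^2 \epsilon\, c_s M_{\mathrm{Pl}}^2} \right|_{c_s k = aH}, \qquad f_{\mathrm{NL}} \sim 1 - \frac{1}{c_s^2} , \]

so a non-standard kinetic term (\(c_s \neq 1\)), and in particular a time-dependent \(c_s\), feeds directly into both the spectrum and the bispectrum studied here.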
Abstract:
Big Data has forged new technologies that improve quality of life by combining heterogeneous representations of data across various disciplines. A real-time system capable of processing data as they arrive is therefore needed. Such a system is called a speed layer; as the name suggests, it is designed to guarantee that new data are returned by the query functions as quickly as they arrive. This thesis concerns the implementation of an architecture modelled on the Speed Layer of the Lambda Architecture, able to receive weather data published on an MQTT queue, process them in real time and store them in a database, making them available to data scientists. The programming environment used is Java; the project was deployed on the Hortonworks platform, which is based on the Hadoop framework and on the Storm computation system, which makes it possible to work with unbounded data streams, performing processing in real time. Unlike traditional stream-processing approaches based on networks of queues and workers, Storm is fault-tolerant and scalable. The effort devoted to its development by the Apache Software Foundation, its growing production use by major companies, and the support from cloud hosting providers are signs that this technology will become increasingly popular as a solution for managing distributed, event-oriented computation. To store and analyse these volumes of data, which have always posed a problem that traditional databases could not overcome, a non-relational database was used: HBase.
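A minimal sketch of the speed-layer data path described above (MQTT in, HBase out). For illustration, Python with paho-mqtt and happybase stands in for the thesis's Java/Storm topology on Hortonworks; the broker host, topic and table schema are all assumed.

import json
import happybase
import paho.mqtt.client as mqtt

hbase = happybase.Connection("hbase-host")     # assumed HBase Thrift host
table = hbase.table("weather")                 # assumed table with family "m"

def on_message(client, userdata, msg):
    reading = json.loads(msg.payload)          # e.g. {"station":..., "ts":..., ...}
    row_key = f"{reading['station']}-{reading['ts']}".encode()
    table.put(row_key, {b"m:temp": str(reading["temp"]).encode(),
                        b"m:hum": str(reading["hum"]).encode()})

client = mqtt.Client()                         # paho-mqtt 1.x style constructor
client.on_message = on_message
client.connect("mqtt-broker")                  # assumed broker host
client.subscribe("sensors/weather")            # assumed topic
client.loop_forever()                          # process readings as they arrive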
Abstract:
In the field of computer-assisted orthopedic surgery (CAOS), the anterior pelvic plane (APP) is a common concept for determining the pelvic orientation by digitizing distinct pelvic landmarks. As percutaneous palpation is known to be error-prone, especially for obese patients, B-mode ultrasound (US) imaging could provide an alternative. Several concepts of using ultrasound imaging to determine the APP landmarks have been introduced. In this paper we present a novel technique, which uses local patch statistical shape models (SSMs) and a hierarchical speed-of-sound compensation strategy for an accurate determination of the APP. These patches are independently matched and instantiated with respect to associated point clouds derived from the acquired ultrasound images. Potential inaccuracies due to the assumption of a constant speed of sound are compensated by an extended reconstruction scheme. We validated our method with in-vitro studies using a plastic bone covered with a soft-tissue simulation phantom, and with a preliminary cadaver trial.
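The basic correction underlying such a compensation strategy can be sketched as follows: depths computed by the scanner with an assumed constant speed of sound are rescaled with a better tissue estimate. Since the echo time is fixed, depth scales linearly with the speed of sound; the numbers below are illustrative.

C_ASSUMED = 1540.0    # m/s, a typical scanner default

def corrected_depth(measured_depth_mm, c_true):
    """Echo time is fixed, so depth scales linearly with sound speed."""
    return measured_depth_mm * (c_true / C_ASSUMED)

# E.g. a bone surface digitised through fat (slower than the default):
print(corrected_depth(42.0, c_true=1450.0))    # true surface is shallower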
Abstract:
In many cases, it is not possible to hold motorists to account for considerably exceeding the speed limit, because they deny being the driver on the speed-check photograph. An anthropological comparison of facial features using photo-to-photo comparison can be very difficult, depending on the quality of the photographs. One difficulty of that analysis method is that the comparison photographs of the presumed driver are taken with a different camera or lens and from a different angle than the speed-check photo, and taking a comparison photograph with exactly the same camera setup is almost impossible. Therefore, only an imprecise comparison of the individual facial features is possible, and the geometry and position of each facial feature, for example the distance between the eyes or the position of the ears, cannot be taken into consideration. We applied a new method using 3D laser scanning, optical surface digitalization, and photogrammetric calculation of the speed-check photo, which enables a geometric comparison. The influence of the focal length and the distortion of the objective lens are thus eliminated, and the precise position and viewing direction of the speed-check camera are calculated. Even in cases of low-quality images, or when the face of the driver is partly hidden, this method delivers good results. The new method, Geometric Comparison, is evaluated and validated in a controlled study described in this article.
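A minimal sketch of the geometric core of such a comparison, assuming the speed-check camera's intrinsics and pose have been recovered by the photogrammetric calculation: 3D facial landmarks from the surface scan are projected into the speed-check image for comparison. OpenCV's pinhole model stands in for the actual photogrammetry software, and every number is invented.

import numpy as np
import cv2

K = np.array([[1200.0, 0.0, 640.0],       # assumed focal length / principal point
              [0.0, 1200.0, 480.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                        # lens distortion removed by calibration
rvec = np.array([0.05, -0.10, 0.00])      # assumed camera rotation (Rodrigues)
tvec = np.array([0.10, -0.05, 1.80])      # assumed camera translation, metres

# Illustrative 3D landmarks (eye corners, nasion) from the surface scan.
landmarks_3d = np.array([[-0.031, 0.000, 0.000],
                         [ 0.031, 0.000, 0.000],
                         [ 0.000, 0.015, 0.020]])

pts_2d, _ = cv2.projectPoints(landmarks_3d, rvec, tvec, K, dist)
print(pts_2d.reshape(-1, 2))              # pixel positions to compare with the photo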