946 results for Elastic programming
Abstract:
The electromagnetic form factors of the proton are fundamental quantities sensitive to the distribution of charge and magnetization inside the proton. Precise knowledge of the form factors, in particular of the charge and magnetization radii, provides strong tests for theory in the non-perturbative regime of QCD. However, the existing data at Q^2 below 1 (GeV/c)^2 are not precise enough for a hard test of theoretical predictions.

For a more precise determination of the form factors, within this work more than 1400 cross sections of the reaction H(e,e′)p were measured at the Mainz Microtron MAMI using the 3-spectrometer facility of the A1 collaboration. The data were taken in three periods in the years 2006 and 2007 using beam energies of 180, 315, 450, 585, 720 and 855 MeV. They cover the Q^2 region from 0.004 to 1 (GeV/c)^2 with counting-rate uncertainties below 0.2% for most of the data points. The relative luminosity of the measurements was determined using one of the spectrometers as a luminosity monitor. The overlapping acceptances of the measurements maximize the internal redundancy of the data and, together with several additions to the standard experimental setup, allow for tight control of systematic uncertainties.
To account for the radiative processes, an event generator was developed and implemented in the simulation package of the analysis software; it works without the peaking approximation by explicitly calculating the Bethe-Heitler and Born Feynman diagrams for each event.
To separate the form factors and to determine the radii, the data were analyzed by fitting a wide selection of form factor models directly to the measured cross sections. These fits also determined the absolute normalization of the different data subsets. The validity of this method was tested with extensive simulations. The results were compared to an extraction via the standard Rosenbluth technique.

The dip structure in G_E that was seen in the analysis of the previous world data shows up in a modified form. When compared to the standard-dipole form factor as a smooth curve, the extracted G_E exhibits a strong change of slope around 0.1 (GeV/c)^2, and in the magnetic form factor a dip around 0.2 (GeV/c)^2 is found. This may be taken as an indication of a pion cloud. For higher Q^2, the fits yield larger values for G_M than previous measurements, in agreement with form factor ratios from recent precise polarized measurements in the Q^2 region up to 0.6 (GeV/c)^2.

The charge and magnetic rms radii are determined as
⟨r_e⟩ = 0.879 ± 0.005(stat.) ± 0.004(syst.) ± 0.002(model) ± 0.004(group) fm,
⟨r_m⟩ = 0.777 ± 0.013(stat.) ± 0.009(syst.) ± 0.005(model) ± 0.002(group) fm.
This charge radius is significantly larger than theoretical predictions and than the radius of the standard dipole. However, it is in agreement with earlier results measured at the Mainz linear accelerator and with determinations from hydrogen Lamb shift measurements. The extracted magnetic radius is smaller than previous determinations and than the standard-dipole value.
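For reference, the rms radii quoted above are conventionally obtained from the slope of the form factors at Q^2 = 0. The relations below are the standard textbook definitions, together with the standard-dipole form factor used as the comparison curve; they are quoted as background, not taken verbatim from the thesis.

% Standard definitions of the rms radii from the form factor slope at Q^2 = 0.
\begin{equation}
  \langle r_{E}^{2} \rangle \;=\; -6\,\frac{\mathrm{d}G_{E}(Q^{2})}{\mathrm{d}Q^{2}}\Big|_{Q^{2}=0},
  \qquad
  \langle r_{M}^{2} \rangle \;=\; -\frac{6}{G_{M}(0)}\,\frac{\mathrm{d}G_{M}(Q^{2})}{\mathrm{d}Q^{2}}\Big|_{Q^{2}=0}.
\end{equation}
% Standard dipole form factor commonly used as a smooth reference curve.
\begin{equation}
  G_{\mathrm{dipole}}(Q^{2}) \;=\; \left(1 + \frac{Q^{2}}{0.71\,(\mathrm{GeV}/c)^{2}}\right)^{-2}.
\end{equation}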
Abstract:
Thesis concerning the creation of all the graphical assets needed for a first-person three-dimensional video game with Blender and Unity3D. The topics covered are: design, 3D modeling, texturing and shading.
Abstract:
The thesis covers the entire design process of a video game and its implementation. The topics covered are: Unity, Design & Gameplay, and the implementation of the project.
Abstract:
The present thesis focuses on the behaviour of elastic waves in ordinary structures as well as in acousto-elastic metamaterials, via numerical and experimental applications. After a brief introduction on the behaviour of elastic guided waves in the framework of non-destructive evaluation (NDE) and structural health monitoring (SHM), and on the study of elastic wave propagation in acousto-elastic metamaterials, dispersion curves for thin-walled beams and arbitrary cross-section waveguides are extracted via Semi-Analytical Finite Element (SAFE) methods. A novel strategy tackling signal dispersion to locate defects in irregular waveguides is then proposed and numerically validated. Finally, a procedure for impact location based on time reversal and laser vibrometry is numerically and experimentally tested. In the second part, an introduction and a brief review of the basic definitions needed to describe acousto-elastic metamaterials are provided. A numerical approach to extract dispersion properties in such structures is highlighted. Afterwards, solid-solid and solid-fluid phononic systems are discussed via numerical applications. In particular, band structures and transmission power spectra are predicted for 1P-2D, 2P-2D and 2P-3D phononic systems. In addition, attenuation bands in the ultrasonic as well as in the sonic frequency regimes are experimentally investigated. In the experimental validation, measurements with PZTs in a pitch-catch configuration and with laser vibrometry are performed on a PVC phononic plate in the ultrasonic frequency range, and the sound insulation index is computed for a 2P-3D phononic barrier in the sonic frequency range. In both cases the comparison of numerical and experimental results confirms the existence of the numerically predicted band gaps. Finally, the feasibility of an innovative passive isolation strategy based on giant elastic metamaterials is numerically shown to be practical for civil structures. In particular, attenuation of seismic waves is demonstrated via finite element analyses. Further, a parametric study shows that, depending on the soil properties, such an earthquake-proof barrier could lead to a significant reduction of the superstructure displacement.
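For readers unfamiliar with the SAFE technique mentioned above, the dispersion curves are obtained from an eigenvalue problem posed on the cross-section mesh. A generic form of this problem is sketched below; it is the standard formulation found in the SAFE literature, and the exact definition of the matrices varies between implementations and is not taken from the thesis.

% Generic SAFE formulation: displacements are assumed harmonic along the
% waveguide axis x, u(x,y,z,t) = U(y,z) e^{i(\kappa x - \omega t)}, which
% turns the 3D elastodynamic problem into a 2D eigenvalue problem on the
% cross-section:
\begin{equation}
  \left[\, \mathbf{K}_1 + i\kappa\,\mathbf{K}_2 + \kappa^{2}\mathbf{K}_3
        - \omega^{2}\mathbf{M} \,\right]\mathbf{U} \;=\; \mathbf{0},
\end{equation}
% where K_1, K_2, K_3 are stiffness-like matrices assembled on the cross-section
% mesh, M is the mass matrix, \kappa the axial wavenumber and \omega the angular
% frequency. Solving for \kappa at each \omega yields the dispersion curves.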
Abstract:
Recent research has shown that a single, arbitrarily efficient algorithm can be significantly outperformed by a portfolio of (possibly on-average slower) algorithms. Within the Constraint Programming (CP) context, a portfolio solver can be seen as a particular constraint solver that exploits the synergy between the constituent solvers of its portfolio to predict which is (or which are) the best solver(s) to run on a new, unseen instance. In this thesis we examine the benefits of portfolio solvers in CP. Although portfolio approaches have been extensively studied for Boolean Satisfiability (SAT) problems, in the more general CP field these techniques have been only marginally studied and used. We conducted this work through the investigation, analysis and construction of several portfolio approaches for solving both satisfaction and optimization problems. We focused in particular on sequential approaches, i.e., single-threaded portfolio solvers always running on the same core. We started from a first empirical evaluation of portfolio approaches for solving Constraint Satisfaction Problems (CSPs), and then improved on it by introducing new data, solvers, features, algorithms, and tools. Afterwards, we addressed the more general Constraint Optimization Problems (COPs) by implementing and testing a number of models for dealing with COP portfolio solvers. Finally, we came full circle by developing sunny-cp: a sequential CP portfolio solver that turned out to be competitive also in the MiniZinc Challenge, the reference competition for CP solvers.
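As a rough illustration of the portfolio idea described above, the sketch below picks solvers for an unseen instance from the solvers that performed best on its k nearest neighbours in feature space. It is a minimal, hypothetical example in the spirit of k-NN-based portfolio approaches such as SUNNY, not the actual sunny-cp code; the instance features, solver names and runtimes are invented.

import math
from collections import Counter

# Hypothetical training data: per-instance feature vectors and, for each solver,
# the runtime in seconds (None = not solved within the timeout).
training = [
    {"features": [0.2, 0.8], "runtimes": {"gecode": 12.0, "chuffed": 3.5, "or-tools": None}},
    {"features": [0.9, 0.1], "runtimes": {"gecode": None, "chuffed": 40.0, "or-tools": 7.2}},
    {"features": [0.3, 0.7], "runtimes": {"gecode": 9.1, "chuffed": 2.0, "or-tools": 55.0}},
    {"features": [0.8, 0.2], "runtimes": {"gecode": None, "chuffed": None, "or-tools": 11.0}},
]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def select_solvers(new_features, k=3, schedule_size=2):
    """Return the solvers that solved most neighbour instances, fastest first."""
    neighbours = sorted(training, key=lambda t: euclidean(t["features"], new_features))[:k]
    score = Counter()
    for inst in neighbours:
        for solver, runtime in inst["runtimes"].items():
            if runtime is not None:
                # Reward solvers that solve neighbour instances; faster is better.
                score[solver] += 1.0 / (1.0 + runtime)
    return [solver for solver, _ in score.most_common(schedule_size)]

print(select_solvers([0.85, 0.15]))  # prints a small schedule of solver names, best-scoring first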
Abstract:
A capillary force arises from the formation of a meniscus between two solid bodies. In this doctoral thesis, the effects of elastic deformation and liquid adsorption on the capillary force were investigated both theoretically and experimentally. Using an atomic force microscope, the capillary force between a silicon oxide colloid with a radius of 2 µm and a soft surface, such as polydimethylsiloxane or polyisoprene, was measured under ambient conditions as well as at varying ethanol vapour pressures. These results were compared with the capillary forces measured on a hard substrate (silicon wafer) under the same conditions. We observed a monotonic decrease of the capillary force with increasing ethanol vapour pressure (P) for P/Psat > 0.2, where Psat is the saturation vapour pressure.

To explain the experimental results, a previously developed analytical model (Soft Matter 2010, 6, 3930) was extended to take ethanol adsorption into account. This new analytical model revealed two different dependences of the capillary force on P/Psat on hard and soft surfaces. For the hard surface of the silicon wafer, the dependence of the capillary force on the vapour pressure is determined by the ratio of the thickness of the adsorbed ethanol layer to the meniscus radius. On soft polymer surfaces, in contrast, the capillary force depends on the surface deformation and on the Laplace pressure inside the meniscus. The decrease of the capillary force with increasing ethanol vapour pressure thus reflects the decrease of the Laplace pressure with increasing meniscus radius.

The analytical calculations, which assumed a Hertzian contact deformation, were compared with finite element method simulations that explicitly take into account the real deformation of the elastic substrate in the vicinity of the meniscus. This additional upward surface deformation in the region of the meniscus leads to a further increase of the capillary force, in particular for soft surfaces with elastic moduli < 100 MPa.
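For context, two textbook relations underlie the quantities discussed above: the Kelvin equation, linking the equilibrium meniscus (Kelvin) radius to the relative vapour pressure, and the classical capillary force between a rigid sphere and a rigid flat surface. They are quoted here only as background, in their simplest form and up to geometry-dependent factors; the thesis extends this picture to adsorbed liquid layers and deformable substrates.

% Kelvin equation: equilibrium meniscus (Kelvin) radius r_K as a function of the
% relative vapour pressure P/P_sat (gamma: surface tension, V_m: molar volume of
% the liquid, R: gas constant, T: temperature).
\begin{equation}
  r_K \;=\; \frac{\gamma\,V_m}{R\,T\,\ln\!\left(P_{\mathrm{sat}}/P\right)}
\end{equation}
% Classical capillary force between a rigid sphere of radius R_s and a rigid flat
% surface for a small meniscus (theta: contact angle).
\begin{equation}
  F_{\mathrm{cap}} \;\approx\; 4\pi R_s \gamma \cos\theta
\end{equation}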
Abstract:
People suffering from end-stage renal disease face two possible treatments: dialysis or organ transplantation. If they wish to follow the second path, besides being placed on the deceased-donor waiting list, they may find a person, such as a spouse, a relative or a friend, willing to donate a kidney. However, the transplant is not always feasible: donor and recipient may in fact be incompatible in terms of blood group or tissue type. As an answer to this kind of problem, the KEP (Kidney Exchange Program) was born: a program, already widely established in several European countries and worldwide, that pools donor/recipient pairs in this situation in order to arrange and maximize crossed kidney exchanges between compatible pairs. This thesis examines the question further by evaluating the possibility of merging donor/recipient pairs from several countries into a single international pool. The goal, naturally, is to obtain an ever larger number of performed transplants. The study addresses, from a mathematical point of view, the issues raised by such a collaboration: the countries that might agree to participate in such a program must in fact be guaranteed not only that they gain an advantage from it, but also that this advantage is fairly distributed among all participating countries.
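As a toy illustration of the optimization problem behind a KEP, the following sketch computes the largest set of simultaneous two-way exchanges in a small invented pool as a maximum-cardinality matching (using networkx). The pool data and the simplified compatibility rule are hypothetical, and real KEPs also allow longer cycles, altruistic chains, and the fairness constraints between countries discussed in the thesis.

import networkx as nx

# Hypothetical pool: each pair has an id, the donor's blood type and the
# recipient's blood type (tissue compatibility is ignored in this toy model).
pairs = {
    1: {"donor": "A",  "recipient": "B"},
    2: {"donor": "B",  "recipient": "A"},
    3: {"donor": "O",  "recipient": "AB"},
    4: {"donor": "AB", "recipient": "O"},
    5: {"donor": "A",  "recipient": "O"},
}

def blood_compatible(donor, recipient):
    """Simplified ABO rule: O gives to all, AB receives from all, same type is fine."""
    return donor == "O" or recipient == "AB" or donor == recipient

# Build an undirected graph with an edge between two pairs whenever each donor
# can give to the other pair's recipient (a feasible two-way exchange).
g = nx.Graph()
g.add_nodes_from(pairs)
for i in pairs:
    for j in pairs:
        if (i < j
                and blood_compatible(pairs[i]["donor"], pairs[j]["recipient"])
                and blood_compatible(pairs[j]["donor"], pairs[i]["recipient"])):
            g.add_edge(i, j)

# A maximum-cardinality matching gives the largest set of simultaneous swaps.
exchanges = nx.max_weight_matching(g, maxcardinality=True)
print(exchanges)  # e.g. two swaps, i.e. four transplants, in this toy pool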
Abstract:
Nowadays, data handling and data analysis in High Energy Physics require a vast amount of computational power and storage. In particular, the worldwide LHC Computing Grid (LCG), an infrastructure and pool of services developed and deployed by a large community of physicists and computer scientists, has proved to be a game changer for the efficiency of data analyses during Run-I at the LHC, playing a crucial role in the Higgs boson discovery. Recently, the Cloud computing paradigm has been emerging and reaching a considerable level of adoption by many different scientific organizations and beyond. The Cloud makes it possible to access and use large, externally owned computing resources shared among many scientific communities. Considering the challenging requirements of LHC physics in Run-II and beyond, the LHC computing community is interested in exploring Clouds and seeing whether they can provide a complementary approach - or even a valid alternative - to the existing technological solutions based on the Grid. In the LHC community, several experiments have been adopting Cloud approaches, and in particular the experience of the CMS experiment is of relevance to this thesis. LHC Run-II has just started, and Cloud-based solutions are already in production for CMS. However, other approaches to Cloud usage are being considered and are at the prototype level, such as the work done in this thesis. This effort is of paramount importance for equipping CMS with the capability to elastically and flexibly access and use the computing resources needed to face the challenges of Run-III and Run-IV. The main purpose of this thesis is to present forefront Cloud approaches that allow the CMS experiment to extend to on-demand resources dynamically allocated as needed. Moreover, direct access to Cloud resources is presented as a suitable use case for meeting the needs of the CMS experiment. Chapter 1 presents an overview of High Energy Physics at the LHC and of the CMS experience in Run-I, as well as the preparation for Run-II. Chapter 2 describes the current CMS Computing Model, and Chapter 3 presents the Cloud approaches pursued and used within the CMS Collaboration. Chapter 4 and Chapter 5 discuss the original and forefront work done in this thesis to develop and test working prototypes of elastic extensions of CMS computing resources on Clouds, and of HEP Computing “as a Service”. The impact of such work on benchmark CMS physics use cases is also demonstrated.
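As a purely illustrative sketch of what "elastically extending" a computing site can mean, the toy function below decides how many extra cloud workers to request from the state of a job queue. It is a hypothetical example, not the CMS prototypes described in the thesis, and all names and parameters are invented.

import math

def desired_workers(pending_jobs, running_workers, jobs_per_worker=8, max_workers=100):
    """Toy elastic-scaling rule: scale the number of cloud workers with the queue length."""
    needed = math.ceil(pending_jobs / jobs_per_worker)
    # Never scale below what is already running, never above the cloud quota.
    return max(running_workers, min(needed, max_workers))

def reconcile(queue_length, running_workers, request_worker):
    """Request additional on-demand workers until the desired pool size is reached."""
    target = desired_workers(queue_length, running_workers)
    for _ in range(target - running_workers):
        request_worker()  # e.g. start one more virtual machine via a cloud API
    return target

# Example: 120 pending jobs and 5 running workers -> grow the pool to 15 workers.
print(reconcile(120, 5, request_worker=lambda: None))  # prints 15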
Abstract:
Almost 10 years after “The Free Lunch Is Over” article, in which the need to parallelize programs started to become a real and mainstream issue, a lot has happened:
• Processor manufacturers are reaching the physical limits of most of their approaches to boosting CPU performance, and are instead turning to hyperthreading and multicore architectures;
• Applications increasingly need to support concurrency;
• Programming languages and systems are increasingly forced to deal well with concurrency.
This thesis is an attempt to propose an overview of a paradigm that aims to properly abstract the problem of propagating data changes: Reactive Programming (RP). This paradigm proposes an asynchronous, non-blocking approach to concurrency and computation, abstracting away from low-level concurrency mechanisms.
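To give a concrete flavour of "abstracting the problem of propagating data changes", here is a minimal, hypothetical push-based stream in plain Python. It is not code from the thesis and is far simpler than a real RP library such as Rx; it only shows how values pushed into a source propagate through derived streams to subscribers without explicit polling or blocking.

class Observable:
    """A tiny push-based stream: values are propagated to subscribers as they arrive."""
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def emit(self, value):
        for callback in self._subscribers:
            callback(value)

    def map(self, fn):
        """Derive a new stream whose values are fn applied to this stream's values."""
        out = Observable()
        self.subscribe(lambda v: out.emit(fn(v)))
        return out

    def filter(self, pred):
        """Derive a new stream that only forwards values satisfying pred."""
        out = Observable()
        self.subscribe(lambda v: pred(v) and out.emit(v))
        return out

# Usage: changes pushed into `clicks` propagate through the derived streams
# automatically; the consumer never polls or blocks.
clicks = Observable()
clicks.filter(lambda n: n % 2 == 0).map(lambda n: n * 10).subscribe(print)
for n in range(5):
    clicks.emit(n)  # prints 0, 20, 40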
Abstract:
Aggregate programming is a paradigm that supports the programming of systems of devices, adaptive and possibly large-scale, as a whole -- as aggregates. The prevailing approach in this context is based on the field calculus, a formal calculus that allows aggregate programs to be defined through the functional composition of computational fields, laying the groundwork for the specification of robust self-organization patterns. Aggregate programming is currently supported, more or less partially and mainly for simulation, by dedicated DSLs (cf. Protelis), but there are no frameworks for mainstream languages aimed at application development. Yet such support would be desirable, to reduce the time and effort of adoption, to simplify access to the paradigm when building real systems, and to foster research in the field itself. The present work consists in the development, starting from a prototype of the operational semantics of the field calculus, of a framework for aggregate programming in Scala. The choice of Scala as the host language stems from technical and practical reasons. Scala is a modern language, interoperable with Java, that integrates the object-oriented and functional paradigms well, has an expressive type system, and provides advanced features for the development of libraries and DSLs. Moreover, the possibility of relying, on Scala, on a solid actor framework such as Akka is another driving factor, given the need to bridge the abstraction gap inherent in the development of a distributed middleware. The thesis presents a framework that achieves a threefold objective: the construction of a Scala library that implements the semantics of the field calculus correctly and completely, the realization of an Akka-based distributed platform on which to develop applications, and the exposure of a general and flexible API able to support different scenarios.
Abstract:
The goal of this thesis is to analyse and test reactive programming, a programming paradigm particularly well suited to the development of highly interactive applications. Designing reactive systems necessarily implies the use of asynchronous code, and reactive programming (RP) gives the programmer simple mechanisms to manage it. In this thesis, reactive programming was used and evaluated through the implementation of a real-world project called AvvocaTimer. The design, implementation and testing of a part of the system are carried out with the reactive approach and then compared with the first version, implemented with the methods currently used to manage asynchronous code, in order to analyse the advantages and/or disadvantages deriving from the use of the new paradigm.
Abstract:
The thesis is set in the field of Aggregate Programming and consists of an introductory first part on this field, followed by a description of the artefacts produced and, finally, some concluding remarks together with possible future developments. The project part consists in the integration of the Scafi framework with the Alchemist simulator and with a platform for creating and executing systems in the Spatial Computing domain, with the aim of strengthening the existing toolchain for Aggregate Programming. Furthermore, a short chapter is also included on running the scafi framework, developed in Scala, on the Android platform.
Abstract:
When healthy observers make a saccade that is erroneously directed toward a distracter stimulus, they often produce a corrective saccade within 100 ms after the end of the primary saccade. Such short inter-saccadic intervals indicate that programming of the secondary saccade has been initiated prior to the execution of the primary saccade and hence that the two saccades have been programmed concurrently. Here we show that concurrent saccade programming is bilaterally impaired in left spatial neglect, a strongly lateralized disorder of visual attention resulting from extensive right cerebral damage. Neglect patients were asked to make saccades to targets presented left or right of fixation while disregarding a distracter presented in the opposite hemifield. We examined those experimental trials on which participants first made a saccade to the distracter, followed by a secondary (corrective) saccade to the target. Compared to healthy and right-hemisphere-damaged control participants, the proportion of secondary saccades directing gaze to the target, instead of bringing it even closer to the distracter, was bilaterally reduced in neglect patients. In addition, the characteristic reduction of secondary saccade latency observed in both control groups was absent in neglect patients, whether the secondary saccade was directed to the left or the right hemifield. This pattern is consistent with a severe, bilateral impairment of concurrent saccade programming in left spatial neglect.
Knowing the future: partial foreknowledge effects on the programming of prosaccades and antisaccades
Abstract:
Foreknowledge about the demands of an upcoming trial may be exploited to optimize behavioural responses. In the current study we systematically investigated the benefits of partial foreknowledge, that is, when some but not all aspects of a future trial are known in advance. For this we used an ocular motor paradigm with horizontal prosaccades and antisaccades. Predictable sequences were used to create three partial-foreknowledge conditions: one with foreknowledge about the stimulus location only, one with foreknowledge about the task set only, and one with foreknowledge about the direction of the required response only. These were contrasted with a condition of no foreknowledge and a condition of complete foreknowledge about all three parameters. The results showed that the three types of foreknowledge affected saccadic efficiency differently. While foreknowledge about stimulus location had no effect on efficiency, task foreknowledge had some effect, and response foreknowledge was as effective as complete foreknowledge. Foreknowledge effects on switch costs followed a similar pattern in general, but were not specific to switching of the trial attribute for which foreknowledge was available. We conclude that partial foreknowledge has a differential effect on efficiency, most consistent with preparatory activation of a motor schema in advance of the stimulus, with consequent benefits for both switched and repeated trials.