Abstract:
Several MCAO systems are under study to improve the angular resolution of the current and future generations of large ground-based telescopes (diameters in the 8-40 m range). The subject of this PhD Thesis is embedded in this context. Two MCAO systems, in different realization phases, are addressed in this Thesis: NIRVANA, the 'double' MCAO system designed for one of the interferometric instruments of LBT, is in the integration and testing phase; MAORY, the future E-ELT MCAO module, is under preliminary study. These two systems tackle the sky coverage problem in two different ways. The layer-oriented approach of NIRVANA, coupled with multi-pyramid wavefront sensors, takes advantage of the optical co-addition of the signal coming from up to 12 NGS in an annular 2' to 6' technical FoV and up to 8 in the central 2' FoV. Summing the light coming from many natural sources makes it possible to increase the limiting magnitude of the single NGS and to improve the sky coverage considerably. One of the two Wavefront Sensors for the mid-high altitude atmosphere analysis has been integrated and tested as a stand-alone unit in the laboratory at INAF-Osservatorio Astronomico di Bologna and afterwards delivered to the MPIA laboratories in Heidelberg, where it was integrated and aligned to the post-focal optical relay of one LINC-NIRVANA arm. A number of tests were performed in order to characterize and optimize the system functionalities and performance. A report on this work is presented in Chapter 2. In the MAORY case, to ensure correction uniformity and sky coverage, the LGS-based approach is the current baseline. However, since the Sodium layer is approximately 10 km thick, the artificial reference source looks elongated, especially when observed from the edge of a large aperture. On a 30-40 m class telescope, for instance, the maximum elongation varies between a few arcsec and 10 arcsec, depending on the actual telescope diameter, on the Sodium layer properties and on the laser launcher position. The centroiding error in a Shack-Hartmann WFS increases proportionally to the elongation (in a photon-noise-dominated regime), strongly limiting the performance. A straightforward solution to compensate for this effect is to increase the laser power, i.e. the number of detected photons per subaperture. The scope of Chapter 3 is twofold: an analysis of the performance of three different algorithms (Weighted Center of Gravity, Correlation and Quad-cell) for the instantaneous LGS image position measurement in the presence of elongated spots, and the determination of the number of photons required to achieve a certain average wavefront error over the telescope aperture. An alternative optical solution to the spot elongation problem is proposed in Section 3.4. Starting from the considerations presented in Chapter 3, a first-order analysis of the LGS WFS for MAORY (number of subapertures, number of detected photons per subaperture, RON, focal plane sampling, subaperture FoV) is the subject of Chapter 4. An LGS WFS laboratory prototype was designed to reproduce the relevant aspects of an LGS SH WFS for the E-ELT and to evaluate the performance of different centroid algorithms in the presence of elongated spots, as investigated numerically and analytically in Chapter 3. This prototype makes it possible to simulate realistic Sodium profiles. A full testing plan for the prototype is set out in Chapter 4.
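As an illustration of the first of the centroid algorithms named above, the Weighted Center of Gravity, the following minimal sketch computes a weighted centroid of a subaperture spot image; the synthetic elongated spot, the Gaussian weighting function and all variable names are illustrative assumptions and are not taken from the thesis.

```python
import numpy as np

def weighted_center_of_gravity(spot, weight):
    """Estimate the spot position in a subaperture as a weighted centroid.

    spot   : 2D array of detected photons per pixel (background subtracted)
    weight : 2D array of the same shape, e.g. a Gaussian template centred on
             the expected spot position
    """
    w = spot * weight
    total = w.sum()
    ys, xs = np.indices(spot.shape)
    cx = (w * xs).sum() / total
    cy = (w * ys).sum() / total
    return cx, cy

# Example with a synthetic elongated (anisotropic Gaussian) spot
ny, nx = 16, 16
ys, xs = np.indices((ny, nx))
spot = np.exp(-((xs - 8.3) ** 2 / (2 * 1.0 ** 2) + (ys - 7.6) ** 2 / (2 * 4.0 ** 2)))
weight = np.exp(-((xs - 8.0) ** 2 + (ys - 8.0) ** 2) / (2 * 2.0 ** 2))
print(weighted_center_of_gravity(spot, weight))
```

Weighting the pixels by a template centred on the expected position down-weights pixels far from the core of the spot, reducing the contribution of the elongated tails to the position estimate.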
Abstract:
The construction and use of multimedia corpora have been advocated for a while in the literature as one of the expected future application fields of Corpus Linguistics. This research project represents a pioneering experience aimed at applying a data-driven methodology to the study of the field of AVT, similarly to what has been done in the last few decades in the macro-field of Translation Studies. This research was based on the experience of Forlixt 1, the Forlì Corpus of Screen Translation, developed at the University of Bologna’s Department of Interdisciplinary Studies in Translation, Languages and Culture. As a matter of fact, in order to quantify strategies of linguistic transfer of an AV product, we need to take into consideration not only the linguistic aspect of such a product but also all the meaning-making resources deployed in the filmic text. Given that one major benefit of Forlixt 1 is the combination of audiovisual and textual data, this corpus allows the user to access primary data for scientific investigation, and thus to no longer rely on pre-processed material such as traditional annotated transcriptions. Based on this rationale, the first chapter of the thesis sets out to illustrate the state of the art of research in the disciplinary fields involved. The primary objective was to underline the main repercussions on multimedia texts resulting from the interaction of a double support, audio and video, and, accordingly, on the procedures, means and methods adopted in their translation. By drawing on previous research in semiotics and film studies, the relevant codes at work in the visual and acoustic channels were outlined. Subsequently, we concentrated on the analysis of the verbal component and on the peculiar characteristics of filmic orality as opposed to spontaneous dialogic production. In the second part, an overview of the main AVT modalities was presented (dubbing, voice-over, interlinguistic and intra-linguistic subtitling, audio-description, etc.) in order to define the different technologies, processes and professional qualifications that this umbrella term presently includes. The second chapter focuses diachronically on the contribution of various theories to the application of Corpus Linguistics methods and tools to the field of Translation Studies (i.e. Descriptive Translation Studies, Polysystem Theory). In particular, we discussed how the use of corpora can favourably help reduce the gap existing between qualitative and quantitative approaches. Subsequently, we reviewed the tools traditionally employed by Corpus Linguistics for the construction of traditional “written language” corpora, to assess whether and how they can be adapted to meet the needs of multimedia corpora. In particular, we reviewed existing speech and spoken corpora, as well as multimedia corpora specifically designed to investigate Translation. The third chapter reviews Forlixt 1's main development steps, from a technical (IT design principles, data query functions) and methodological point of view, by laying down extensive scientific foundations for the annotation methods adopted, which presently encompass categories of a pragmatic, sociolinguistic, linguacultural and semiotic nature. Finally, we described the main query tools (free search, guided search, advanced search and combined search) and the main intended uses of the database from a pedagogical perspective.
The fourth chapter lists the specific compilation criteria adopted, as well as statistics of the two sub-corpora, by presenting data broken down by language pair (French-Italian and German-Italian) and genre (cinema comedies, television soap operas and crime series). Next, we concentrated on the discussion of the results obtained from the analysis of summary tables reporting the frequency of the categories applied to the French-Italian sub-corpus. The detailed observation of the distribution of categories identified in the original and dubbed corpus allowed us to empirically confirm some of the theories put forward in the literature, notably concerning the nature of the filmic text, the dubbing process and the features of Italian dubbed language. This was possible by looking into some of the most problematic aspects, like the rendering of sociolinguistic variation. The corpus also allowed us to consider hitherto neglected aspects, such as pragmatic, prosodic, kinetic, facial and semiotic elements, and their combination. At the end of this first exploration, some specific observations concerning possible macrotranslation trends were made for each type of sub-genre considered (cinematic and TV genre). On the grounds of this first quantitative investigation, the fifth chapter set out to examine the data further, by applying ad hoc models of analysis. Given the virtually infinite number of combinations of the categories adopted, and of the latter with searchable textual units, three possible qualitative and quantitative methods were designed, each of which concentrated on a particular translation dimension of the filmic text. The first one was the cultural dimension, which specifically focused on the rendering of selected cultural references and on the investigation of recurrent translation choices and strategies justified on the basis of the occurrence of specific clusters of categories. The second analysis was conducted on the linguistic dimension by exploring the occurrence of phrasal verbs in the Italian dubbed corpus and by ascertaining the influence of possible semiotic traits, such as gestures and facial expressions, on the adoption of related translation strategies. Finally, the main aim of the third study was to verify whether, under which circumstances, and through which modality graphic and iconic elements were translated into Italian from an original corpus of both German and French films. After having reviewed the main translation techniques at work, an exhaustive account of possible causes for their non-translation was also provided. By way of conclusion, the discussion of the results obtained from the distribution of annotation categories in the French-Italian corpus, as well as the application of specific models of analysis, allowed us to underline possible advantages and drawbacks related to the adoption of a corpus-based approach to AVT studies. Even though possible updates and improvements were proposed in order to help solve some of the problems identified, it is argued that the added value of Forlixt 1 lies ultimately in having created a valuable instrument, making it possible to carry out empirically sound contrastive studies that may be usefully replicated on different language pairs and several types of multimedia texts. Furthermore, multimedia corpora can also play a crucial role in L2 and translation teaching, two disciplines in which their use still lacks systematic investigation.
Abstract:
The aim of this work is the development of a generally applicable, medium-scale GIS assessment procedure for determining the groundwater recharge rate while taking long-term land use change into account, as well as the presentation of a scenario for assessing the groundwater hazard posed by selected armament-specific organic compounds. The study area is located in a region that was heavily used for military and armament purposes during both World Wars. The groundwater hazard assessment scenarios are purely qualitative. The dependence of the groundwater recharge rate on land use change is clearly recognizable: with increasing surface sealing, accompanied by altered percolation rates and near-surface runoff, the groundwater recharge rate decreases markedly. The groundwater hazard assessment for the explosives-typical compounds is carried out in a first, emission-oriented step for the unsaturated zone, taking into account the site and soil properties and the physico-chemical properties of the substances. In a second step, an immission-oriented analysis of the potential contamination of drinking water wells is performed. Despite strong generalization effects, this substance-specific assessment approach, combined with physical methods for organic compounds, provides a good first overview of potential groundwater hazards. The data holdings of the developed GIS application are easily extensible, so that the assessment of potential hazards can also be carried out for other groups of organic substances and transferred to other study areas.
Abstract:
The ability to integrate into a unified percept sensory inputs deriving from different sensory modalities, but related to the same external event, is called multisensory integration and might represent an efficient mechanism of sensory compensation when a sensory modality is damaged by a cortical lesion. This hypothesis is discussed in the present dissertation. Experiment 1 explored the role of the superior colliculus (SC) in multisensory integration, testing patients with collicular lesions, patients with subcortical lesions not involving the SC, and healthy control subjects in a multisensory task. The results revealed that patients with collicular lesions, paralleling evidence from animal studies, demonstrated a loss of multisensory enhancement, in contrast with control subjects, providing the first lesional evidence in humans of the essential role of the SC in mediating audio-visual integration. Experiment 2 investigated the role of the cortex in mediating multisensory integrative effects, inducing virtual lesions by inhibitory theta-burst stimulation over the temporo-parietal cortex, the occipital cortex and the posterior parietal cortex, and demonstrating that only the temporo-parietal cortex was causally involved in modulating the integration of audio-visual stimuli at the same spatial location. Given the involvement of the retino-colliculo-extrastriate pathway in mediating audio-visual integration, the functional sparing of this circuit in hemianopic patients is extremely relevant in the perspective of a multisensory-based approach to the recovery of unisensory defects. Experiment 3 demonstrated the spared functional activity of this circuit in a group of hemianopic patients, revealing the presence of implicit recognition of the fearful content of unseen visual stimuli (i.e. affective blindsight), an ability mediated by the retino-colliculo-extrastriate pathway and its connections with the amygdala. Finally, Experiment 4 provided evidence that systematic audio-visual stimulation is effective in inducing long-lasting clinical improvements in patients with visual field defects, and revealed that the activity of the spared retino-colliculo-extrastriate pathway is responsible for the observed clinical amelioration, as suggested by the greater improvement, in tasks highly demanding in terms of spatial orienting, observed in patients with cortical lesions limited to the occipital cortex compared to patients with lesions extending to other cortical areas. Overall, the present results indicate that multisensory integration is mediated by the retino-colliculo-extrastriate pathway and that systematic audio-visual stimulation, activating this spared neural circuit, is able to affect orientation towards the blind field in hemianopic patients and might therefore constitute an effective and innovative approach to the rehabilitation of unisensory visual impairments.
Abstract:
In this doctoral thesis a methodology was developed to fully apply the corpus-based approach to simultaneous interpreting research. DIRSI-C is a parallel (Italian-English/English-Italian), aligned electronic corpus containing transcripts of recorded international medical conferences mediated by professional simultaneous interpreters working both from and into their foreign language. Against this backdrop, directionality represents the research parameter used to analyze the interpreters' performance by means of corpus linguistics tools.
Abstract:
While the use of distributed intelligence has been incrementally spreading in the design of a great number of intelligent systems, the field of Artificial Intelligence in Real Time Strategy games has remained mostly a centralized environment. Although turn-based games have attained AIs of world-class level, the fast-paced nature of RTS games has proven to be a significant obstacle to the quality of their AIs. Chapter 1 introduces RTS games, describing their characteristics, mechanics and elements. Chapter 2 introduces Multi-Agent Systems and the use of the Beliefs-Desires-Intentions abstraction, analysing the possibilities given by self-computing properties. In Chapter 3 the current state of AI development in RTS games is analyzed, highlighting the struggles of the gaming industry to produce valuable AIs. The focus on improving the multiplayer experience has gravely impacted the quality of the AIs, leaving them with serious flaws that impair their ability to challenge and entertain players. Chapter 4 explores different aspects of AI development for RTS games, evaluating the potential strengths and weaknesses of an agent-based approach and analysing which aspects can benefit the most compared with centralized AIs. Chapter 5 describes a generic agent-based framework for RTS games where every game entity becomes an agent, each with its own knowledge and set of goals. Different aspects of the game, like economy, exploration and warfare, are also analysed, and some agent-based solutions are outlined. The possible exploitation of self-computing properties to efficiently organize the agents' activity is then inspected. Chapter 6 presents the design and implementation of an AI for an existing Open Source game in beta development stage: 0 a.d., a historical RTS game on ancient warfare which features a modern graphical engine and evolved mechanics. The entities in the conceptual framework are implemented in a new agent-based platform seamlessly nested inside the existing game engine, called ABot, widely described in Chapters 7, 8 and 9. Chapters 10 and 11 include the design and realization of a new agent-based language useful for defining behavioural modules for the agents in ABot, paving the way for a wider spectrum of contributors. Chapter 12 concludes the work by analysing the outcome of tests meant to evaluate strategies, realism and pure performance; conclusions and future work are finally drawn in Chapter 13.
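The Beliefs-Desires-Intentions abstraction and the per-entity agents mentioned above can be pictured with the following minimal, hypothetical sketch; class and method names are illustrative only and do not reflect the actual ABot implementation.

```python
# Minimal, hypothetical sketch of a BDI-style game-entity agent.

class BDIAgent:
    def __init__(self, entity_id):
        self.entity_id = entity_id
        self.beliefs = {}        # the agent's local knowledge of the game state
        self.desires = []        # candidate goals, e.g. "gather", "explore", "attack"
        self.intentions = []     # goals the agent has committed to

    def perceive(self, observation):
        # update beliefs from the portion of the game state visible to this entity
        self.beliefs.update(observation)

    def deliberate(self):
        # commit to the desires that look feasible given the current beliefs
        self.intentions = [d for d in self.desires if self.is_feasible(d)]

    def is_feasible(self, desire):
        # placeholder feasibility check against beliefs
        return True

    def act(self):
        # translate the first intention into a concrete game command
        return self.intentions[0] if self.intentions else None

# usage: one agent per game entity, stepped every game tick
worker = BDIAgent(entity_id=42)
worker.desires = ["gather"]
worker.perceive({"nearest_resource": (10, 3)})
worker.deliberate()
print(worker.act())
```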
Abstract:
The field of "computer security" is often considered something in between Art and Science. This is partly due to the lack of widely agreed and standardized methodologies to evaluate the degree of the security of a system. This dissertation intends to contribute to this area by investigating the most common security testing strategies applied nowadays and by proposing an enhanced methodology that may be effectively applied to different threat scenarios with the same degree of effectiveness. Security testing methodologies are the first step towards standardized security evaluation processes and understanding of how the security threats evolve over time. This dissertation analyzes some of the most used identifying differences and commonalities, useful to compare them and assess their quality. The dissertation then proposes a new enhanced methodology built by keeping the best of every analyzed methodology. The designed methodology is tested over different systems with very effective results, which is the main evidence that it could really be applied in practical cases. Most of the dissertation discusses and proves how the presented testing methodology could be applied to such different systems and even to evade security measures by inverting goals and scopes. Real cases are often hard to find in methodology' documents, in contrary this dissertation wants to show real and practical cases offering technical details about how to apply it. Electronic voting systems are the first field test considered, and Pvote and Scantegrity are the two tested electronic voting systems. The usability and effectiveness of the designed methodology for electronic voting systems is proved thanks to this field cases analysis. Furthermore reputation and anti virus engines have also be analyzed with similar results. The dissertation concludes by presenting some general guidelines to build a coordination-based approach of electronic voting systems to improve the security without decreasing the system modularity.
Abstract:
This thesis deals with distributed control strategies for the cooperative control of multi-robot systems. Specifically, distributed coordination strategies are presented for groups of mobile robots. The formation control problem is initially solved exploiting artificial potential fields. The purpose of the presented formation control algorithm is to drive a group of mobile robots into a completely arbitrarily shaped formation. Robots are initially controlled to create a regular polygon formation. A bijective coordinate transformation is then exploited to extend the scope of this strategy and obtain arbitrarily shaped formations. For this purpose, artificial potential fields are specifically designed, and robots are driven to follow their negative gradient. Artificial potential fields are subsequently exploited to solve the coordinated path tracking problem, thus making the robots autonomously spread along predefined paths and move along them in a coordinated way. The formation control problem is then solved exploiting a consensus-based approach. Specifically, weighted graphs are used both to define the desired formation and to implement collision avoidance. As expected for consensus-based algorithms, this control strategy is experimentally shown to be robust to the presence of communication delays. The global connectivity maintenance issue is then considered. Specifically, an estimation procedure is introduced to allow each agent to compute its own estimate of the algebraic connectivity of the communication graph in a distributed manner. This estimate is then exploited to develop a gradient-based control strategy that ensures that the communication graph remains connected as the system evolves. The proposed control strategy is developed initially for single-integrator kinematic agents and is then extended to Lagrangian dynamical systems.
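As a rough illustration of the artificial potential field strategy described above, in which each robot follows the negative gradient of a designed potential, the following sketch combines an attractive term towards a formation slot with repulsive terms for collision avoidance; the potential shape, gains and variable names are assumptions and are not the controllers developed in the thesis.

```python
import numpy as np

def potential_gradient(p_i, neighbors, goal, k_att=1.0, k_rep=0.5, safe_dist=1.0):
    """Negative-gradient control input for one robot in a potential-field scheme.

    p_i       : 2D position of robot i
    neighbors : list of 2D positions of nearby robots (for collision avoidance)
    goal      : 2D position this robot should reach in the formation
    """
    # attractive term pulling the robot towards its formation slot
    grad = k_att * (p_i - goal)
    # repulsive terms pushing the robot away from close neighbors
    for p_j in neighbors:
        d = np.linalg.norm(p_i - p_j)
        if 1e-9 < d < safe_dist:
            grad += -k_rep * (1.0 / d - 1.0 / safe_dist) * (p_i - p_j) / d**3
    # the robot moves along the negative gradient of the total potential
    return -grad

# single-integrator kinematic update for one robot
p = np.array([0.0, 0.0])
u = potential_gradient(p, neighbors=[np.array([0.3, 0.1])], goal=np.array([2.0, 1.0]))
p = p + 0.05 * u  # Euler step with a small time step
```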
From fall-risk assessment to fall detection: inertial sensors in the clinical routine and daily life
Abstract:
Falls are caused by a complex interaction between multiple risk factors, which may be modified by age, disease and environment. A variety of methods and tools for fall-risk assessment have been proposed, but none of them is universally accepted. Existing tools are generally not capable of providing a quantitative predictive assessment of fall risk. Objective, cost-effective and clinically applicable methods are needed to enable quantitative assessment of fall risk on a subject-specific basis. Objectively tracking fall risk could provide timely feedback about the effectiveness of administered interventions, enabling intervention strategies to be modified or changed if found to be ineffective. Moreover, some of the fundamental factors leading to falls, and what actually happens during a fall, remain unclear. Objectively documented and measured falls are needed to improve knowledge of falls in order to develop more effective prevention strategies and prolong independent living. In the last decade, several research groups have developed sensor-based automatic or semi-automatic fall-risk assessment tools using wearable inertial sensors. This approach may also serve to detect falls. At the moment, i) several fall-risk assessment studies based on inertial sensors, even if promising, lack a biomechanical model-based approach which could provide accurate and more detailed measurements of interest (e.g., joint moments, forces) and ii) the amount of published data on real-world falls of older people is minimal, since most authors have used simulations with healthy volunteers as a surrogate for real-world falls. With these limitations in mind, this thesis aims i) to propose a novel method for the kinematic and dynamic evaluation of functional motor tasks, often used in clinics for fall-risk evaluation, through a body sensor network and a biomechanical approach, and ii) to define the guidelines for a fall detection algorithm based on the availability of a real-world fall database.
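As a purely generic illustration of sensor-based fall detection with a wearable accelerometer, the following sketch flags an impact peak followed by a period of near-stillness; the thresholds and variable names are illustrative assumptions and do not correspond to the guidelines defined in the thesis.

```python
import numpy as np

def detect_fall_candidates(acc, fs, impact_g=2.5, still_g=0.15, still_window_s=2.0):
    """Flag candidate fall events in a tri-axial accelerometer recording.

    acc : (N, 3) array of accelerations in units of g
    fs  : sampling frequency in Hz
    A sample is a candidate if an impact peak above `impact_g` is followed by
    a window of low variability in the acceleration magnitude.
    """
    mag = np.linalg.norm(acc, axis=1)
    win = int(still_window_s * fs)
    candidates = []
    for i in np.where(mag > impact_g)[0]:
        after = mag[i + 1 : i + 1 + win]
        if len(after) == win and np.std(after) < still_g:
            candidates.append(i / fs)  # time of the impact in seconds
    return candidates

# toy usage: 10 s of quiet standing (1 g vertical) yields no candidates
fs = 100
acc = np.tile([0.0, 0.0, 1.0], (10 * fs, 1))
print(detect_fall_candidates(acc, fs))
```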
Abstract:
This work addresses the problem of scaling reinforcement learning to high-dimensional and complex tasks. Reinforcement learning here denotes a class of learning methods based on approximate dynamic programming, used in particular in Artificial Intelligence, which can be applied to the autonomous control of simulated agents or real hardware robots in dynamic and uncertain environments. To this end, regression over samples is used to determine a function that solves an "optimality equation" (Bellman) and from which approximately optimal decisions can be derived. A major hurdle is the dimensionality of the state space, which is often high and therefore poorly suited to traditional grid-based approximation methods. The goal of this work is to make reinforcement learning applicable to, in principle, arbitrarily high-dimensional problems by means of non-parametric function approximation (more precisely, regularization networks). Regularization networks are a generalization of ordinary basis-function networks that parametrize the sought solution by the data, so that the explicit choice of nodes/basis functions is no longer necessary and the "curse of dimensionality" can be avoided for high-dimensional inputs. At the same time, regularization networks are linear approximators, which are technically easy to handle and for which the existing convergence results for reinforcement learning remain valid (unlike, for instance, feed-forward neural networks). All these theoretical advantages, however, are offset by a very practical problem: the computational cost of regularization networks inherently scales as O(n^3), where n is the number of data points. This is particularly problematic because in reinforcement learning the learning process is online -- the samples are generated by an agent/robot while it interacts with the environment. Adjustments to the solution must therefore be made immediately and with little computational effort. The contribution of this work is accordingly divided into two parts. In the first part, we formulate an efficient learning algorithm for regularization networks for solving general regression tasks, tailored specifically to the requirements of online learning. Our approach is based on Recursive Least-Squares, but can add not only new data points but also new basis functions to the existing model in constant time. This is made possible by the "Subset of Regressors" approximation, whereby the kernel is approximated by a strongly reduced selection of training data, and by a greedy selection procedure that picks these basis elements directly from the data stream at run time. In the second part we transfer this algorithm to approximate policy evaluation via least-squares-based temporal-difference learning, and integrate this building block into an overall system for the autonomous learning of optimal behaviour. Altogether, we develop a highly data-efficient method that is particularly suited to learning problems from robotics with continuous, high-dimensional state spaces and stochastic state transitions.
In doing so, we do not rely on a model of the environment, work largely independently of the dimension of the state space, achieve convergence with relatively few agent-environment interactions, and, thanks to the efficient online algorithm, can also operate in the context of time-critical real-time applications. We demonstrate the capability of our approach on two realistic and complex application examples: the RoboCup Keepaway problem and the control of a (simulated) octopus tentacle.
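As a generic illustration of the Recursive Least-Squares idea on which the first part builds, the following sketch performs the classical rank-one online update for a linear-in-features model; it deliberately omits the Subset-of-Regressors kernel approximation and the greedy basis selection that constitute the actual contribution, and all names are illustrative.

```python
import numpy as np

class RecursiveLeastSquares:
    """Minimal online least-squares for a model y ~ w . phi(x)."""

    def __init__(self, n_features, reg=1.0):
        self.w = np.zeros(n_features)
        # P approximates the inverse of the regularized Gram matrix
        self.P = np.eye(n_features) / reg

    def update(self, phi, y):
        # Sherman-Morrison rank-one update: cost per sample is O(d^2), not O(n^3)
        Pphi = self.P @ phi
        k = Pphi / (1.0 + phi @ Pphi)          # gain vector
        self.w += k * (y - self.w @ phi)       # correct the prediction error
        self.P -= np.outer(k, Pphi)            # update the inverse Gram matrix

# usage: stream samples (phi_t, y_t) and update after each one
rls = RecursiveLeastSquares(n_features=3)
rls.update(np.array([1.0, 0.5, -0.2]), y=0.7)
print(rls.w)
```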
Abstract:
This thesis develops the theoretical proposals of Cognitive Linguistics about metaphor and proposes a possible application of them in language teaching. Cognitive Linguistics provides the interpretative framework of the research, starting from its main concepts: the integrated perspective, embodiment, the centrality of semantics, and the attention to psycholinguistics and neuroscience. Within this panorama, an idea of metaphor gains strength as a meeting point between language and thought, as an organizing criterion for knowledge, and as a fundamental cognitive tool in learning processes. At the didactic level, metaphor proves indispensable both as an operational tool and as an object of reflection. The cognitivist approach can provide useful indications on how to design a teaching path on metaphor. The present work investigates, in particular, the didactic use of non-verbal stimuli in strengthening the metaphorical competence of middle-school students. Advertising was chosen as the starting material for two reasons: the widespread use of rhetorical strategies in advertising and the communicative specificity of the genre, which allows a clear disambiguation of phenomena that, in other contexts, could not be analysed with the same univocality. A workshop aimed at improving the students' metaphorical competence is therefore presented, relying on two complementary strategies: on the one hand, an explanation inspired by cognitivist models, both in the terminology employed and in the (usage-based) mode of analysis; on the other, a training session with visual metaphors in advertising, comprising an analysis phase and a production phase. A test, divided into specific tasks, was used to objectify as much as possible the students' progress at the end of the training, but also to detect difficulties and strengths in the analysis with respect both to the contexts of use (literary and conventional) and to the linguistic forms taken by the metaphor (nominal, verbal, adjectival).
Abstract:
In this thesis three measurements of the top-antitop differential cross section at a center-of-mass energy of 7 TeV will be shown, as a function of the transverse momentum, the mass and the rapidity of the top-antitop system. The analysis has been carried out on a data sample of about 5/fb recorded with the ATLAS detector. The events have been selected with a cut-based approach in the "one lepton plus jets" channel, where the lepton can be either an electron or a muon. The most relevant backgrounds (multi-jet QCD and W+jets) have been extracted using data-driven methods; the others (Z+jets, diboson and single top) have been simulated with Monte Carlo techniques. The final, background-subtracted distributions have been corrected, using unfolding methods, for detector and selection effects. Finally, the results have been compared with the theoretical predictions. The measurements are dominated by the systematic uncertainties and show no relevant deviation from the Standard Model predictions.
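As a purely illustrative sketch of what a cut-based "one lepton plus jets" selection looks like, the following snippet keeps events with exactly one charged lepton and a minimum number of jets; the thresholds (in GeV) and the event-record field names are assumptions and are not the cuts used in the ATLAS analysis.

```python
# Hypothetical cut-based selection in the "one lepton plus jets" channel.

def passes_lepton_plus_jets(event, lep_pt_min=25.0, jet_pt_min=25.0, n_jets_min=4):
    """Keep events with exactly one electron or muon and at least n_jets_min jets."""
    leptons = [l for l in event["electrons"] + event["muons"] if l["pt"] > lep_pt_min]
    jets = [j for j in event["jets"] if j["pt"] > jet_pt_min]
    return len(leptons) == 1 and len(jets) >= n_jets_min

# toy event record
event = {
    "electrons": [{"pt": 32.0}],
    "muons": [],
    "jets": [{"pt": 95.0}, {"pt": 60.0}, {"pt": 41.0}, {"pt": 28.0}],
}
print(passes_lepton_plus_jets(event))  # True
```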
Abstract:
The aim of this thesis is to apply multilevel regression models in the context of household surveys. The hierarchical structure in this type of data is characterized by many small groups. In recent years, comparative and multilevel analyses in the field of perceived health have grown in number. The purpose of this thesis is to develop a multilevel analysis with three levels of hierarchy for the Physical Component Summary outcome in order to: evaluate the magnitude of the within- and between-group variance at each level (individual, household and municipality); explore which covariates affect perceived physical health at each level; compare the model-based and design-based approaches in order to establish the informativeness of the sampling design; and estimate a quantile regression for hierarchical data. The target population is Italian residents aged 18 years and older. Our study shows a high degree of homogeneity among level-1 units belonging to the same group, with an intraclass correlation of 27% in a level-2 null model. Almost all the variance is explained by level-1 covariates. In fact, in our model the explanatory variables having the greatest impact on the outcome are disability, inability to work, age and chronic diseases (18 pathologies). An additional analysis is performed using a novel estimation procedure, the "Linear Quantile Mixed Model", here referred to as "Multilevel Linear Quantile Regression". This gives us the possibility to describe more generally the conditional distribution of the response through the estimation of its quantiles, while accounting for the dependence among the observations. This represents a great advantage of our models with respect to classic multilevel regression. The median regression with random effects proves to be more efficient than the mean regression in representing the central tendency of the outcome. A more detailed analysis of the conditional distribution of the response at other quantiles highlights a differential effect of some covariates along the distribution.
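As an illustration of the two-level null model and of the intraclass correlation quoted above, the following sketch fits an intercept-only random-effects model with statsmodels; the toy data frame and the variable names (pcs, household) are assumptions, not the survey data analysed in the thesis.

```python
import pandas as pd
import statsmodels.formula.api as smf

# toy data: a perceived physical health score clustered by household
df = pd.DataFrame({
    "pcs": [45.2, 50.1, 38.7, 42.0, 55.3, 47.8, 40.9, 52.4],
    "household": [1, 1, 2, 2, 3, 3, 4, 4],
})

# null (intercept-only) model with a random intercept per household
null_model = smf.mixedlm("pcs ~ 1", df, groups=df["household"]).fit()

var_between = null_model.cov_re.iloc[0, 0]   # between-household variance
var_within = null_model.scale                # residual (within-household) variance
icc = var_between / (var_between + var_within)
print(f"Intraclass correlation: {icc:.2f}")
```

The intraclass correlation is simply the share of the total variance attributable to the grouping level; in the thesis the analogous quantity for the level-2 null model is reported as 27%.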
Abstract:
The central aim of this thesis work is the application and further development of a hybrid quantum mechanical/molecular mechanics (QM/MM) based approach to compute spectroscopic properties of molecules in complex chemical environments from electronic structure theory. In the framework of this thesis, an existing density functional theory implementation of the QM/MM approach is first used to calculate the nuclear magnetic resonance (NMR) solvent shifts of an adenine molecule in aqueous solution. The findings show that the aqueous solvation, with its strongly fluctuating hydrogen bond network, leads to specific changes in the NMR resonance lines. Besides the absolute values, also the ordering of the NMR lines changes under the influence of the solvating water molecules. Without the QM/MM scheme, a quantum chemical calculation could have led to an incorrect assignment of these lines. The second part of this thesis describes a methodological improvement of the QM/MM method designed for cases in which a covalent chemical bond crosses the QM/MM boundary. The development consists of an automated protocol to optimize a so-called capping potential that saturates the electronic subsystem in the QM region. The optimization scheme is capable of tuning the parameters in such a way that the deviations of the electronic orbitals between the regular and the truncated (and "capped") molecule are minimized. This in turn results in a considerable improvement of the structural and spectroscopic parameters when computed with the new optimized capping potential within the QM/MM technique. This optimization scheme is applied and benchmarked on the example of truncated carbon-carbon bonds in a set of small test molecules. It turns out that the optimized capping potentials yield an excellent agreement of NMR chemical shifts and protonation energies with respect to the corresponding full molecules. These results are very promising, suggesting that the application to larger biological complexes will significantly improve the reliability of the prediction of the related spectroscopic properties.
Abstract:
The purpose of this work is to present the studies and results obtained during the research activity on the Displacement-based Assessment (DBA) of reinforced concrete frames. After some initial considerations on seismic vulnerability and on analysis and verification methods, the theoretical description of the method is given. Three case studies of plane frames were analysed, designed for vertical loads only and according to codes no longer in force, which did not require the application of capacity design. The frames considered, intended for residential use, have different heights, numbers of storeys and numbers of bays. The method was applied, the seismic vulnerability was evaluated against a displacement demand given by the elastic spectrum of EC8, and the results were validated by non-linear static and dynamic analyses and by applying the theorems of limit analysis of frames, proposed as an alternative procedure for determining the inelastic mechanism and the capacity in terms of base shear. Finally, the DBA procedure was applied to assess the seismic vulnerability of a school building, built between 1969 and 1975 on a site characterized by a peak horizontal acceleration of 0.24g and a probability of exceedance of 10% in 75 years.
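For reference, under the usual Poisson assumption (not stated in the abstract), the quoted hazard level of a 10% probability of exceedance in 75 years corresponds to the following return period:

\[
T_R = -\frac{t}{\ln(1 - p)} = -\frac{75}{\ln(1 - 0.10)} \approx 712 \ \text{years}
\]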