Abstract:
CHAPTER 1 INTRODUCTION. This work concerns the metric use of historical satellite images with panoramic geometry; specifically, it deals with satellite images acquired by the US CORONA platform, designed and operated essentially for military purposes between the 1960s and 1970s and recently declassified, which has opened access to non-military users and applications. The recovery of historical aerial and satellite imagery is of great interest for a wide range of territorial applications, from the analysis of urban or regional development down to site-specific investigations of archaeological, industrial or environmental interest. There exists, in fact, an enormous wealth of information that could fill the gaps in cartographic documentation, which for obvious technical and economic reasons represents territorial evolution only asynchronously and sporadically, with distortions and limitations of informational content tied to the purposes and representation conventions of maps over time and across different applications. A photographic image offers a complete, objective representation of what exists and can very effectively complement cartographic data, or stand in for it where none exists. Most of the historical image heritage is certainly tied to photogrammetric flights which, from the first decades of the twentieth century, covered vast areas of the most advanced countries, or regions of military interest. Alongside these, and obviously over more recent periods, stand the images acquired from satellite platforms, among which those taken for military espionage are of great interest, being of high geometric resolution and excellent detail.
Unfortunately, this rich heritage is still largely inaccessible today, although initiatives have recently been launched to open it up for civilian use, also in view of the obsolescence of the data and of the availability of other, better sources of information offered by modern remote sensing. Historical images, whether aerial or satellite, are in most cases used qualitatively, to investigate the presence or absence of objects or phenomena; only rarely are they used in a metric, objective way, which would require, among other things, technical data (for example the calibration certificate of an aerial photogrammetric camera) that have been lost or are inaccessible. It should also be remembered that the cameras of the time often suffered from optical distortion or other forms of image degradation that made metric use difficult. On the other hand, a metric use of these images would give the analysis of the territory and of its changes an objective meaning that would be essential for several purposes: for example, measuring objects that no longer exist, or precisely comparing and co-registering historical images with properly georeferenced current ones. The CORONA images are particularly interesting because of several specific features: first of all, they combine high resolution (ground pixel size down to 1.80 m) with wide ground coverage (frames from some missions cover strips up to 250 km long). These two characteristics derive from the acquisition principle adopted, namely the panoramic geometry, chosen precisely because it is the only one that combines them, making it very well suited to espionage purposes.
Moreover, given the number and frequency of the missions within the programme of the same name, the time series of these frames allow a rich and detailed reconstruction of past territorial configurations, thanks to the larger amount of information and the impartiality associated with photographic products. It should be stated from the outset that these images, although they represent a remarkable historical resource (they date from 1959 to 1972 and cover very wide regions of great interest for territorial analyses), have very rarely been used for metric purposes. This is probably due to the fact that their metric processing is by no means simple, for a series of reasons that will be highlighted in the following chapters. The experimental work carried out in this thesis had two primary objectives, one general and one more specific: on the one hand, an attempt to assess in a broad sense the potential of the enormous heritage represented by these images (available at a low cost compared with similar products); on the other, the opportunity to investigate the local territorial situation of an area of south-eastern Turkey (around the archaeological site of Tilmen Höyük) on which a project led by the University of Bologna is active (scientific coordinator Prof. Nicolò Marchetti, Department of Archaeology), with which DISTART has actively collaborated since 2005. The activity is carried out in collaboration with the University of Istanbul and the Gaziantep Archaeological Museum.
This work also fits into a wider perspective than the one just outlined, namely a regional-scale study of the area containing the archaeological excavations of Tilmen Höyük; the availability of multitemporal images over a wide time span, as well as of multi-sensor and multispectral data, would equip such a study with tools of the highest interest for characterising the changes that have occurred. As for the more general aspect, developing a procedure for the metric processing of CORONA images may prove useful to the whole community working with GIS and remote sensing; as recalled above, these images (which cover an area of almost two million square kilometres) represent an immense historical photographic heritage that could (and should) be used both for archaeological purposes and as a support for GIS-based studies of territorial development dynamics in areas where earlier satellite images or cartographic data are scarce or absent. The work is divided into six chapters, of which this is the first. The second chapter gives a brief description of the CORONA space programme (a US project for the photo-reconnaissance of the territory of the former Soviet Union and of the politically related Middle Eastern areas); it reports on the birth and evolution of the programme, describes in some detail the optics employed and the image acquisition modes, and provides all the references (historical and otherwise) useful to readers wishing to learn more about this extraordinary space programme.
The third chapter presents a brief discussion of panoramic images in general: acquisition modes, the geometric and perspective principles underlying panoramic imaging, and the strengths and weaknesses of this type of imagery. It also presents the different methods found in the literature for the correction of panoramic images, and those employed by the (admittedly few) authors who have chosen to give CORONA images a metric meaning (quantitative, rather than merely qualitative as was long the case). The fourth chapter briefly describes the archaeological site of Tilmen Höyük: its geographical location, the chronology of the study campaigns that have concerned it, and the monuments and artefacts found in the area, which have made possible a virtual reconstruction of the original appearance of the city and a deeper understanding of the situation of Mediterranean capitals during the Middle Bronze Age. The fifth chapter is devoted to the main goal of the work, namely the generation of the orthophotomosaic of the area mentioned above. After a theoretical introduction on the production of this kind of product (applicable procedures and transformations, pixel interpolation methods, quality of the DEM used), the results obtained are presented and discussed, seeking to highlight their correlations with the various problems encountered during this thesis work. The sixth and final chapter contains the conclusions on the work presented here. Appendix A reports the tables of the control points used in the exterior orientation of the frames.
Abstract:
In this thesis we focused on the characterization of the reaction center (RC) protein purified from the photosynthetic bacterium Rhodobacter sphaeroides. In particular, we discussed the effects of native and artificial environments on the light-induced electron transfer processes. The native environment consists of the inner antenna LH1 complex, which copurifies with the RC forming the so-called core complex, and the lipid phase tightly associated with it. In parallel, we analyzed the role of saccharidic glassy matrices on the interplay between electron transfer processes and internal protein dynamics. As a different artificial matrix, we incorporated the RC protein in a layer-by-layer structure with a twofold aim: to check the behaviour of the protein in such an unusual environment and to test the response of the system to herbicides. By examining the RC in its native environment, we found that the light-induced charge-separated state P+QB- is markedly stabilized (by about 40 meV) in the core complex as compared to the RC-only system over a physiological pH range. We also verified that, as compared to the average composition of the membrane, the core complex copurifies with a tightly bound lipid complement of about 90 phospholipid molecules per RC, which is strongly enriched in cardiolipin. In parallel, a large ubiquinone pool was found in association with the core complex, giving rise to a quinone concentration about ten times larger than the average one in the membrane. Moreover, this quinone pool is fully functional, i.e. it is promptly available at the QB site during multiple turnover excitation of the RC. The latter two observations suggest important heterogeneities and anisotropies in the native membranes which can in principle account for the stabilization of the charge-separated state in the core complex.
The thermodynamic and kinetic parameters obtained in the RC-LH1 complex are very close to those measured in intact membranes, indicating that the electron transfer properties of the RC in vivo are essentially determined by its local environment. The studies performed by incorporating the RC into saccharidic matrices evidenced the relevance of solvent-protein interactions and of dynamical coupling in determining the kinetics of electron transfer processes. The usual approach when studying the interplay between internal motions and protein function consists in freezing the degrees of freedom of the protein at cryogenic temperatures. We proved that the "trehalose approach" offers distinct advantages over this traditional methodology. We showed, in fact, that the RC conformational dynamics, coupled to specific electron transfer processes, can be modulated by varying the hydration level of the trehalose matrix at room temperature, thus making it possible to disentangle solvent from temperature effects. The comparison between different saccharidic matrices revealed that the structural and dynamical protein-matrix coupling depends strongly upon the sugar. The analyses performed on RCs embedded in polyelectrolyte multilayer (PEM) structures have shown that the electron transfer from QA- to QB, a conformationally gated process extremely sensitive to the RC environment, can be strongly modulated by the hydration level of the matrix, confirming analogous results obtained for this electron transfer reaction in sugar matrices. We found that PEM-RCs are a very stable system, particularly suitable for studying the thermodynamics and kinetics of herbicide binding to the QB site. These features make PEM-RC structures quite promising for the development of herbicide biosensors.
The studies discussed in the present thesis have shown that, although the effects on electron transfer induced by the native and artificial environments tested are markedly different, they can be described on the basis of a common kinetic model which takes into account the static conformational heterogeneity of the RC and the interconversion between conformational substates. Interestingly, the same distribution of rate constants (i.e. a Gamma distribution function) can describe charge recombination processes in solutions of purified RC, in RC-LH1 complexes, in wet and dry RC-PEM structures and in glassy saccharidic matrices over a wide range of hydration levels. In conclusion, the results obtained for RCs in different physico-chemical environments emphasize the relevance of the structure/dynamics solvent/protein coupling in determining the energetics and the kinetics of electron transfer processes in a membrane protein complex.
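The common kinetic picture described above, in which charge recombination is averaged over a Gamma distribution of rate constants, can be illustrated with a short numerical sketch. The parameters below are placeholders chosen for illustration, not values fitted in the thesis; the closed-form survival function is simply the Laplace transform of the Gamma density.

```python
import numpy as np

# Gamma distribution of rate constants: shape a, rate b (hypothetical values).
# The ensemble-averaged survival of the charge-separated state is
# N(t) = <exp(-k t)> = (b / (b + t))**a, a power law rather than a single
# exponential -- the signature of static conformational heterogeneity.
a, b = 2.5, 1.0
t = np.linspace(0.0, 10.0, 101)

# Monte Carlo average over sampled rate constants
rng = np.random.default_rng(0)
k = rng.gamma(shape=a, scale=1.0 / b, size=50_000)
N_mc = np.exp(-np.outer(t, k)).mean(axis=1)

# Closed-form Laplace transform of the Gamma density
N_exact = (b / (b + t)) ** a

max_err = float(np.max(np.abs(N_mc - N_exact)))
```

Fitting a and b to measured recombination kinetics would recover the width of the rate-constant distribution; the same machinery applies across all the environments compared in the thesis.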
Abstract:
Many efforts have been devoted in recent years to reducing the uncertainty of hydrological model predictions. The principal sources of uncertainty are input errors, due to inaccurate rainfall forecasts, and model errors, due to the approximations with which water flow processes in the soil and river discharges are described. The aim of the present work is to develop a Bayesian model in order to reduce the uncertainty of the discharge predictions for the Reno river. The 'a priori' distribution function is given by an autoregressive model, while the likelihood function is provided by a linear equation relating past observed discharge values to the predictions of the hydrological TOPKAPI model driven by the rainfall forecasts of the limited-area model COSMO-LAMI. The 'a posteriori' estimates are obtained through an H∞ filter, because the statistical properties of the estimation errors are not known. In this work a stationary and a dual adaptive filter are implemented and compared. A statistical analysis of the estimation errors and the description of three case studies of flood events that occurred during the fall seasons from 2003 to 2005 are reported. The results also reveal that the errors can be described as a Markovian process only to a first approximation. For the same period, an ensemble of 'a posteriori' estimates is obtained through the COSMO-LEPS rainfall predictions, but the spread of this 'a posteriori' ensemble is not able to encompass the observed variability. This is related to the construction of the meteorological ensemble, whose spread reaches its maximum after 5 days. In the future, the use of a new ensemble, COSMO-SREPS, focused on the first 3 days, could help to enlarge the meteorological and, consequently, the hydrological variability.
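The structure of such a Bayesian correction can be illustrated with a deliberately simplified scalar sketch: an AR(1) model supplies the 'a priori' prediction and the hydrological model output enters through a linear likelihood. A Kalman-style precision-weighted update stands in here for the H∞ filter actually used in the thesis, and all coefficients are hypothetical.

```python
# Hypothetical coefficients for a scalar illustration of the
# 'a priori' / likelihood / 'a posteriori' chain: an AR(1) model
# propagates the previous discharge estimate, and the model output
# enters through a linear observation with known error variance.
# The Kalman-style gain below replaces the H-infinity filter of the
# thesis, which avoids assuming known error statistics.
phi = 0.9      # AR(1) coefficient (hypothetical)
q_var = 4.0    # variance of the 'a priori' (process) error
r_var = 9.0    # variance of the model-output (likelihood) error

def a_posteriori(q_prev, q_model):
    """One corrected discharge estimate (arbitrary units)."""
    prior = phi * q_prev                     # 'a priori' AR prediction
    gain = q_var / (q_var + r_var)           # precision-weighted blending
    return prior + gain * (q_model - prior)  # 'a posteriori' estimate

est = a_posteriori(q_prev=100.0, q_model=130.0)
```

The gain pulls the AR prediction toward the model output in proportion to their relative error variances; an H∞ filter replaces this variance-based gain with one that minimises the worst-case estimation error.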
Abstract:
Many research fields are pushing the engineering of large-scale, mobile, and open systems towards the adoption of techniques inspired by self-organisation: pervasive computing, but also distributed artificial intelligence, multi-agent systems, social networks, peer-to-peer and grid architectures exploit adaptive techniques to make global system properties emerge in spite of the unpredictability of interactions and behaviour. Such a trend is also visible in coordination models and languages, whenever a coordination infrastructure needs to cope with managing interactions in highly dynamic and unpredictable environments. As a consequence, self-organisation can be regarded as a feasible metaphor for defining a radically new conceptual coordination framework. The resulting framework defines a novel coordination paradigm, called self-organising coordination, based on the idea of spreading coordination media over the network and charging them with services that manage interactions according to local criteria, resulting in the emergence of desired and fruitful global coordination properties of the system. Features like topology, locality, time-reactiveness, and stochastic behaviour play a key role both in the definition of such a conceptual framework and in the consequent development of self-organising coordination services. According to this framework, the thesis presents several self-organising coordination techniques developed during the PhD course, mainly concerning data distribution in tuple-space-based coordination systems. Some of these techniques have also been implemented in ReSpecT, a coordination language for tuple spaces based on logic tuples and reactions to events occurring in a tuple space.
In addition, the key role played by simulation and formal verification has been investigated, leading to an analysis of how automatic verification techniques such as probabilistic model checking can be exploited to formally prove the emergence of desired behaviours in coordination approaches based on self-organisation. To this end, a concrete case study is presented and discussed.
Abstract:
This work investigates the orientational glass transition of disordered molecular crystals. The theoretical treatment is complicated by the anisotropy of the one-particle distribution function and of the pair functions. Assuming a rigid lattice, the reciprocal space is in turn restricted to the first Brillouin zone. The orientational glass transition is studied within the mode-coupling equations, which are derived for this purpose. Hard ellipsoids of revolution on a rigid simple cubic lattice serve as the model. To compute the static tensorial structure factors, the Ornstein-Zernike (OZ) equation for molecular crystals is derived and solved self-consistently together with the Percus-Yevick (PY) approximation carried over from molecular liquids. In parallel, the structure factors are determined by MC simulations. The OZ equation for molecular crystals resembles that of liquids, but because of the rigid lattice the direct and total correlation functions appear only without constant parts in the angular variables, in contrast to the PY approximation. The anisotropy moreover introduces a non-trivial additional factor. OZ/PY structure factors and MC results agree well. The matrix elements of the density-density correlation function show three main behaviours: oscillatory, monotonic, and irregular decay. Oscillations correspond to alternating density fluctuations, lead to maxima of the structure factors at the zone boundary, and occur for oblate and sufficiently wide prolate ellipsoids, and more weakly for thin, not too long prolate ellipsoids. The exponential monotonic decay occurs for all ellipsoids and leads to maxima of the structure factors at the zone centre, indicating a tendency towards nematic order. The OZ/PY theory is limited by diverging maxima of the structure factors.
The mode-coupling equations of molecular crystals turn out to be very similar to those of molecular liquids, but on a rigid lattice only the matrix elements with l, l' > 0 play a role, and umklapp processes of reciprocal vectors occur. Here too, the anisotropy introduces non-constant additional factors. Except for flat oblate ellipsoids, the mode-coupling glass line is determined by the divergence of the structure factors. For very long ellipsoids the structure factors must be extrapolated towards the divergence. Hence it is not the orientational cage effect that drives the glass transition, but fluctuations at a phase boundary. Close to the spherical shape no reliable glass line can be established. The frozen-in critical density-density correlators retain the oscillations of the static correlators only in a few cases, whereas the monotonic decay mostly persists to long times. Consequently, the critical mode-coupling non-ergodicity parameters have weakened maxima at the zone centre, while the maxima at the zone boundary have mostly disappeared. The normalised non-ergodicity parameters show a wealth of behaviours, especially deeper in the glass.
Abstract:
Within this PhD thesis, several methods were developed and validated which are suitable for environmental samples and materials science, and which should be applicable to the monitoring of particular radionuclides and to the analysis of the chemical composition of construction materials in the framework of the ESS project. The study demonstrated that ICP-MS is a powerful analytical technique for the ultrasensitive determination of 129I, 90Sr and lanthanides in both artificial and environmental samples such as water and soil. In particular, ICP-MS with a collision cell allows the measurement of extremely low iodine isotope ratios. It was demonstrated that 129I/127I isotope ratios as low as 10^-7 can be measured with an accuracy and precision suitable for distinguishing sample origins. ICP-MS with a collision cell, in particular in combination with cool plasma conditions, reduces the influence of isobaric interferences at m/z = 90 and is therefore well suited for 90Sr analysis in water samples. However, the ICP-CC-QMS applied in this work is limited for the measurement of 90Sr by the tailing of 88Sr+ and, in particular, by Daly detector noise. Hyphenation of capillary electrophoresis with ICP-MS was shown to resolve the atomic ions of all lanthanides and the polyatomic interferences. The elimination of polyatomic and isobaric ICP-MS interferences was accomplished without compromising sensitivity by the use of the high-resolution mode available on ICP-SFMS. The combination of laser ablation with ICP-MS allowed direct micro and local uranium isotope ratio measurements at ultratrace concentrations on the surface of biological samples. In particular, the use of a cooled laser ablation chamber improves the precision and accuracy of uranium isotope ratio measurements by up to one order of magnitude in comparison with a non-cooled laser ablation chamber.
In order to mitigate the quantification problem, a single-gas on-line solution-based calibration was set up, based on the insertion of a DS-5 microflow nebulizer directly into the laser ablation chamber. A micro-local method to determine the lateral element distribution on a NiCrAlY-based alloy and its coating after oxidation in air was tested and validated. Calibration procedures involving external calibration, quantification by relative sensitivity coefficients (RSCs) and solution-based calibration were investigated. The analytical method was validated by comparing the LA-ICP-MS results with data acquired by EDX.
Abstract:
This thesis is concerned with the adsorption and detachment of polymers at planar, rigid surfaces. We have carried out a systematic investigation of polymer adsorption using analytical techniques as well as Monte Carlo simulations with a coarse-grained off-lattice bead-spring model. The investigation was carried out in three stages. In the first stage, the adsorption of a single multiblock AB copolymer on a solid surface was investigated by means of simulations and scaling analysis. It was shown that the problem can be mapped onto an effective homopolymer problem. Our main result was the phase diagram of regular multiblock copolymers, which shows an increase in the critical adsorption potential of the substrate with decreasing block size. We also considered the adsorption of random copolymers, which was found to be well described within the annealed disorder approximation. In the next stage, we studied the adsorption kinetics of a single polymer on a flat, structureless surface in the regime of strong physisorption. The idea of a 'stem-flower' polymer conformation and the mechanism of 'zipping' during the adsorption process were used to derive a Fokker-Planck equation with reflecting boundary conditions for the time-dependent probability distribution function (PDF) of the number of adsorbed monomers. The numerical solution of the time-dependent PDF, obtained from a discrete set of coupled differential equations, was shown to be in perfect agreement with Monte Carlo simulation results. Finally, we studied the force-induced desorption of a polymer chain adsorbed on an attractive surface. We approached the problem within the framework of two different statistical ensembles: (i) by keeping the pulling force fixed while measuring the position of the polymer chain end, and (ii) by measuring the force necessary to keep the chain end at a fixed distance above the adsorbing plane.
In the first case we treated the problem within the framework of the Grand Canonical Ensemble and derived analytic expressions for the various conformational building blocks characterizing the structure of an adsorbed linear polymer chain subject to a pulling force of fixed strength. The main result was the phase diagram of a polymer chain under pulling. We demonstrated a novel first-order phase transformation which is dichotomic, i.e. phase coexistence is not possible. In the second case, we carried out our study in the "fixed height" statistical ensemble, where one measures the fluctuating force exerted by the chain on the last monomer when the chain end is kept fixed at height h above the solid plane at different adsorption strengths ε. The phase diagram in the h-ε plane was calculated both analytically and by Monte Carlo simulations. We demonstrated that in the vicinity of the polymer desorption transition a number of properties, such as fluctuations and the probability distributions of various quantities, behave differently if h, rather than the force f, is used as the independent control parameter.
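The time-dependent PDF of the number of adsorbed monomers mentioned above can be illustrated by a toy master equation with reflecting boundaries; the constant zipping and detachment rates below are hypothetical, whereas the thesis works with a Fokker-Planck equation and rates derived from the 'stem-flower' picture.

```python
import numpy as np

# Toy master equation for P(n, t): probability that n monomers are adsorbed.
# Constant zipping rate w_plus (n -> n+1) and detachment rate w_minus
# (n -> n-1), reflecting boundaries at n = 0 and n = N. Rates and sizes are
# hypothetical placeholders, not values from the thesis.
N = 50
w_plus, w_minus = 1.0, 0.2
P = np.zeros(N + 1)
P[0] = 1.0                      # chain starts fully desorbed

dt, steps = 0.01, 2000          # simple forward-Euler time stepping
for _ in range(steps):
    dP = np.zeros_like(P)
    dP[1:] += w_plus * P[:-1]   # gain at n from zipping out of n-1
    dP[:-1] -= w_plus * P[:-1]  # matching loss (no zipping out of n = N)
    dP[:-1] += w_minus * P[1:]  # gain at n from detachment out of n+1
    dP[1:] -= w_minus * P[1:]   # matching loss (no detachment out of n = 0)
    P += dt * dP

mean_n = float(np.sum(np.arange(N + 1) * P))  # drifts at ~(w_plus - w_minus)
```

The gain/loss bookkeeping keeps the distribution normalised at all times, which is exactly what the reflecting boundary conditions of the Fokker-Planck description guarantee in the continuum limit.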
Abstract:
To assist the rational compound design of organic semiconductors, two problems need to be addressed. First, the material morphology has to be known at an atomistic level. Second, with the morphology at hand, an appropriate charge transport model needs to be developed in order to link charge carrier mobility to structure.

The former can be addressed by generating atomistic morphologies using molecular dynamics simulations. However, the accessible range of time- and length-scales is limited. To overcome these limitations, systematic coarse-graining methods can be used. In the first part of the thesis, the Versatile Object-oriented Toolkit for Coarse-graining Applications is introduced, which provides a platform for the implementation of coarse-graining methods. Tools to perform Boltzmann inversion, iterative Boltzmann inversion, inverse Monte Carlo, and force-matching are available and have been tested on a set of model systems (water, methanol, propane and a single hexane chain). Advantages and problems of each specific method are discussed.

In partially disordered systems, the second issue is closely connected to constructing appropriate diabatic states between which charge transfer occurs. In the second part of the thesis, the description initially used for small conjugated molecules is extended to conjugated polymers. Here, charge transport is modeled by introducing conjugated segments on which charge carriers are localized. Inter-chain transport is then treated within a high-temperature non-adiabatic Marcus theory, while an adiabatic rate expression is used for intra-chain transport. The charge dynamics is simulated using the kinetic Monte Carlo method.

The entire framework is finally employed to establish a relation between the morphology and the charge mobility of the neutral and doped states of polypyrrole, a conjugated polymer.
It is shown that for short oligomers, charge carrier mobility is insensitive to the orientational molecular ordering and is determined by the threshold transfer integral which connects percolating clusters of molecules that form interconnected networks. The value of this transfer integral can be related to the radial distribution function. Hence, charge mobility is mainly determined by the local molecular packing and is independent of the global morphology, at least in such a non-crystalline state of a polymer.
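The plain Boltzmann inversion named in the first part of the abstract can be sketched in a few lines: given a target radial distribution function g(r), the zeroth-order coarse-grained pair potential is U(r) = -kB T ln g(r), which the iterative scheme then refines. The g(r) below is a synthetic single-peak placeholder, not data from the thesis.

```python
import numpy as np

# Zeroth-order Boltzmann inversion: U(r) = -kB*T * ln g(r).
# In practice g(r) comes from an atomistic reference simulation or from
# experiment; here it is a toy Gaussian-peaked RDF for illustration.
kB_T = 2.494                                     # kJ/mol at ~300 K

r = np.linspace(0.25, 1.5, 126)                  # pair distance in nm
g = 1.0 + 0.6 * np.exp(-((r - 0.5) / 0.1) ** 2)  # toy RDF, peak at 0.5 nm

U = -kB_T * np.log(g)                            # coarse-grained potential

r_min = float(r[np.argmin(U)])  # potential minimum sits at the RDF maximum
```

Iterative Boltzmann inversion then refines this guess, updating U_{i+1}(r) = U_i(r) + kB T ln(g_i(r)/g_target(r)) until the coarse-grained simulation reproduces the target structure.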
Abstract:
Top quark studies play an important role in the physics program of the Large Hadron Collider (LHC). The energy and luminosity reached allow the acquisition of a large amount of data, especially in kinematic regions never studied before. This thesis presents the measurement of the ttbar production differential cross section on data collected by ATLAS in 2012 in proton-proton collisions at \sqrt{s} = 8 TeV, corresponding to an integrated luminosity of 20.3 fb^{−1}. The measurement is performed for ttbar events in the semileptonic channel in which the hadronically decaying top quark has a transverse momentum above 300 GeV. The hadronic top quark decay is reconstructed as a single large-radius jet and identified using jet substructure properties. The final differential cross section has been compared with several theoretical distributions, showing a discrepancy of about 25% between data and predictions, depending on the MC generator. Furthermore, the kinematic distributions of the ttbar production process are very sensitive to the choice of the parton distribution function (PDF) set used in the simulations and could provide constraints on the gluon PDF. In particular, this thesis performs a systematic study of the proton PDFs, varying several PDF sets and checking which one best describes the experimental distributions. The boosted techniques applied in this measurement will be fundamental in the next data taking at \sqrt{s} = 13 TeV, when a large number of heavy particles with high momentum will be produced.
Abstract:
Inspired by the need for a representation of the biomass burning emissions injection height in the ECHAM/MESSy Atmospheric Chemistry model (EMAC)
Abstract:
Analyses for the molecular characterization of proteins of the human Usher syndrome and evaluation of gene-based therapy strategies. Human Usher syndrome (USH) is the most frequent form of inherited deaf-blindness. In this dissertation, this complex disease was analysed on several levels: studies of the expression and localization of USH proteins, analysis of the USH protein networks and their functions, and, building on these, the development of therapy strategies for USH. Within this work, the expression and (sub)cellular localization of the USH1D gene product CDH23 in the retina and cochlea were analysed. CDH23 isoforms are differentially expressed in the mouse in time and space. In the retinae of mice, non-human primates and humans, analyses revealed differences in the expression and localization of the cell-cell adhesion molecule CDH23, pointing to functional differences between the individual isoforms in the species analysed. Analyses aimed at elucidating the USH protein networks revealed a potential interaction of the USH1G scaffold protein SANS with the Golgi- and centrosome-associated protein Myomegalin. The direct interaction of the two proteins was verified by independent experiments. Both interaction partners are partially co-localized in the retinae of various species and participate in the periciliary USH protein network. The association of SANS and Myomegalin with the microtubule cytoskeleton points to a function of the protein complex in directed transport processes within the photoreceptors and supports the hypothesis of a role of SANS and its associated networks in transport processes. The extended understanding of the molecular basis gained here, together with the elucidation of the cellular function of the protein networks, enables the development of therapeutic strategies for USH.
Ein Fokus der vorliegenden Arbeit lag auf der Entwicklung genbasierter Therapiestrategien und deren Evaluation, wobei der Schwerpunkt auf der Therapiestrategie der Genreparatur lag. Die mit Hilfe von Zinkfinger-Nukleasen (ZFN) induzierte Homologe Rekombination für die Genkorrektur wurde exemplarisch an der 91C>T/p.R31X-Mutation im USH1C-Gen gezeigt. Effiziente ZFN wurden identifiziert, generiert und erfolgreich im Zellkulturmodellsystem eingesetzt. Die Analysen demonstrierten eine Reparatur der Mutation durch Homologe Rekombination auf genomischer Ebene und die Expression des wiederhergestellten Proteins. Durch die Genkorrektur im endogenen Lokus sind Größe des Gens, Isoformen oder die Art der Mutation keine limitierenden Faktoren für die Therapie. Die in der vorliegenden Arbeit durchgeführten Experimente unterstreichen das enorme Potential ZFN-basierter Therapiestrategien hin zu personalisierten Therapieformen nicht nur für USH sondern auch für andere erbliche Erkrankungen, deren genetische Grundlagen bekannt sind.rn
Resumo:
Coarse graining is a popular technique used in physics to speed up computer simulations of molecular fluids. An essential part of this technique is a method that solves the inverse problem of determining the interaction potential, or its parameters, from given structural data. Because of discrepancies between model and reality, the potential is not unique, so the stability of such a method and its convergence to a meaningful solution are genuine concerns. In this work, we investigate empirically whether coarse graining can be improved by applying the theory of inverse problems from applied mathematics. In particular, we use singular value analysis to reveal the weak interaction parameters, which have a negligible influence on the structure of the fluid and which cause non-uniqueness of the solution. Further, we apply a regularizing Levenberg-Marquardt method, which is stable against the discrepancies mentioned above. We then compare it to the existing physical methods, the Iterative Boltzmann Inversion and the Inverse Monte Carlo method, which are fast and well adapted to the problem but sometimes suffer from convergence problems. From an analysis of the Iterative Boltzmann Inversion, we derive a meaningful approximation of the structure and use it to construct a modification of the Levenberg-Marquardt method. We employ the latter to reconstruct the interaction parameters from experimental data for liquid argon and nitrogen, and we show that the modified method is stable, convergent, and fast. Furthermore, the singular value analysis of the structure and its approximation allows us to determine the crucial interaction parameters, that is, to simplify the modeling of interactions. Our results therefore build a rigorous bridge between the inverse problem from physics and the powerful solution tools from mathematics.
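The two numerical ingredients named in this abstract, singular value analysis of a parameter-to-structure Jacobian and a regularized Levenberg-Marquardt update, can be sketched in a few lines. This is a minimal illustrative example, not the authors' code: the Jacobian `J`, tolerance `tol`, and function names are assumptions introduced here for demonstration.

```python
import numpy as np

def lm_step(J, r, lam):
    """One regularized Levenberg-Marquardt update for the residual r:
    solve (J^T J + lam*I) d = J^T r. The damping lam stabilizes the
    step against ill-conditioned (weak) parameter directions."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ r)

def weak_parameters(J, tol=1e-3):
    """Singular value analysis of the Jacobian of the structure with
    respect to the interaction parameters: right-singular vectors whose
    singular values are tiny relative to the largest span parameter
    combinations that barely affect the structure, i.e. the source of
    non-uniqueness."""
    _, s, Vt = np.linalg.svd(J, full_matrices=False)
    return Vt[s < tol * s.max()]  # rows = weak parameter combinations
```

In a real coarse-graining workflow, `J` would be estimated from simulation runs (e.g. by finite differences of radial distribution functions with respect to potential parameters), and `lam` would be chosen by a regularization strategy rather than fixed by hand.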
Resumo:
The European Association of Urology (EAU) guidelines on urinary incontinence published in March 2012 have been rewritten based on an independent systematic review carried out by the EAU guidelines panel using a sustainable methodology. OBJECTIVE: Here we present a short version of the full guidelines on the surgical treatment of patients with urinary incontinence, with the aim of disseminating them to a wider audience. EVIDENCE ACQUISITION: Evidence appraisal included a pragmatic review of existing systematic reviews and independent new literature searches based on Population, Intervention, Comparator, Outcome (PICO) questions. The appraisal of papers was carried out by an international panel of experts, who also collaborated in a series of consensus discussions to develop concise structured evidence summaries and action-based recommendations using a modified Oxford system. EVIDENCE SUMMARY: The full version of the guidance is available online (www.uroweb.org/guidelines/online-guidelines/). The guidance includes algorithms that refer the reader back to the supporting evidence and are more accessible in daily clinical practice. Two original meta-analyses were carried out specifically for these guidelines and are included in this report. CONCLUSIONS: These new guidelines present an up-to-date summary of the available evidence, together with clear clinical algorithms and action-based recommendations based on the best available evidence. Where high-level evidence is lacking, they present a consensus of expert panel opinion.
Resumo:
Background: The recent development of semi-automated techniques for staining and analyzing flow cytometry samples has presented new challenges. Quality control and quality assessment are critical when developing new high-throughput technologies and their associated information services. Our experience suggests that significant bottlenecks remain in the development of high-throughput flow cytometry methods for data analysis and display; in particular, data quality control and quality assessment are crucial steps in processing and analyzing high-throughput flow cytometry data. Methods: We propose a variety of graphical exploratory data analytic tools for exploring ungated flow cytometry data. We have implemented a number of specialized functions and methods in the Bioconductor package rflowcyt. We demonstrate the use of these approaches by investigating two independent sets of high-throughput flow cytometry data. Results: We found that graphical representations can reveal substantial non-biological differences between samples. Empirical cumulative distribution function plots and summary scatterplots were especially useful for rapidly identifying problems not detected by manual review. Conclusions: Graphical exploratory data analytic tools are a quick and useful means of assessing data quality. We propose that the described visualizations be used as quality assessment tools and, where possible, for quality control.
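The abstract's implementation lives in the R/Bioconductor package rflowcyt; the core ECDF-based quality check it describes can nevertheless be sketched language-agnostically. The following Python functions are an illustrative approximation, not the package's API: `ecdf` and `max_ecdf_gap` are names introduced here.

```python
import numpy as np

def ecdf(values):
    """Empirical cumulative distribution function of one ungated channel:
    returns sorted values x and the fraction of observations <= each x."""
    x = np.sort(np.asarray(values, dtype=float))
    y = np.arange(1, x.size + 1) / x.size
    return x, y

def max_ecdf_gap(a, b):
    """Kolmogorov-Smirnov-style distance between two samples' ECDFs.
    A large gap flags a sample whose distribution deviates from its
    peers, e.g. a staining or instrument problem rather than biology."""
    grid = np.union1d(a, b)
    fa = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    fb = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.max(np.abs(fa - fb))
```

Plotting the per-sample ECDFs on a common axis, as the paper advocates, turns the same computation into a visual screen: samples whose curves separate from the bundle are candidates for exclusion or re-staining.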
Resumo:
Despite the widespread popularity of linear models for correlated outcomes (e.g. linear mixed models and time series models), distribution diagnostic methodology remains relatively underdeveloped in this context. In this paper we present an easy-to-implement approach that lends itself to graphical displays of model fit. Our approach involves multiplying the estimated marginal residual vector by the Cholesky decomposition of the inverse of the estimated marginal variance matrix. The resulting "rotated" residuals are used to construct an empirical cumulative distribution function and pointwise standard errors. The theoretical framework, including conditions and asymptotic properties, involves technical details that are motivated by Lange and Ryan (1989), Pierce (1982), and Randles (1982). Our method appears to work well in a variety of circumstances, including models having independent units of sampling (clustered data) and models for which all observations are correlated (e.g., a single time series). Our methods can produce satisfactory results even for models that do not satisfy all of the technical conditions stated in our theory.
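The rotation step described in this abstract is a standard whitening transform and can be sketched directly. The code below is a minimal numpy illustration of that one step, under the assumption that the marginal variance matrix has already been estimated; the function name is introduced here and is not from the paper.

```python
import numpy as np

def rotated_residuals(resid, V):
    """Rotate marginal residuals by the Cholesky factor of the inverse
    of the estimated marginal variance matrix V. If Cov(resid) = V and
    V^{-1} = L L^T (L lower triangular), then z = L^T resid has
    Cov(z) = I, so the ECDF of z can be compared against a standard
    normal reference."""
    L = np.linalg.cholesky(np.linalg.inv(V))
    return L.T @ resid
```

In the diagnostic workflow the abstract describes, the ECDF of these rotated residuals (with pointwise standard error bands) is then plotted against the reference distribution; systematic departures indicate distributional misfit that raw correlated residuals would mask.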