11 results for Markov chains hidden Markov models Viterbi algorithm Forward-Backward algorithm maximum likelihood

in ArchiMeD - Elektronische Publikationen der Universität Mainz - Germany


Relevance:

100.00%

Publisher:

Abstract:

In this thesis the measurement of the effective weak mixing angle (wma) in proton-proton collisions is described. The results are extracted from the forward-backward asymmetry (AFB) in electron-positron final states at the ATLAS experiment at the LHC. The AFB is defined on the distribution of the polar angle between the incoming quark and the outgoing lepton. The signal process used in this study is the reaction pp -> Z/gamma* + X -> ee + X, using a total integrated luminosity of 4.8 fb^(-1) of data recorded at a proton-proton center-of-mass energy of sqrt(s) = 7 TeV. The weak mixing angle is a central parameter of the electroweak theory of the Standard Model (SM) and relates the neutral-current interactions of electromagnetism and the weak force. Higher-order corrections to wma are related to other SM parameters such as the mass of the Higgs boson.

Because of the symmetric initial state of the colliding protons, there is no preferred forward or backward direction in the experimental setup. The reference axis used in the definition of the polar angle is therefore chosen with respect to the longitudinal boost of the electron-positron final state. As a consequence, events with low absolute rapidity have a higher chance of being assigned to the direction opposite to the reference axis. This effect, called dilution, is reduced when events at higher rapidities are used; such events can be studied by including electrons and positrons in the forward regions of the ATLAS calorimeters. Electrons and positrons are further referred to as electrons. To include the electrons from the forward region, the energy calibration for the forward calorimeters had to be redone. This calibration is performed by inter-calibrating the forward electron energy scale using pairs of a central and a forward electron together with the previously derived central electron energy calibration. The uncertainty is shown to be dominated by the systematic variations.

The extraction of wma is performed using chi^2 tests, comparing the measured AFB distribution in data to a set of template distributions with varied values of wma. The templates are built with a forward-folding technique using modified generator-level samples and the official fully simulated signal sample with full detector simulation and particle reconstruction and identification. The analysis is performed in two different channels: pairs of central electrons, or one central and one forward electron. The results of the two channels are in good agreement and constitute the first measurements of wma at the Z resonance using electron final states in proton-proton collisions at sqrt(s) = 7 TeV. The precision of the measurement is already systematically limited, mostly by the uncertainties resulting from the knowledge of the parton distribution functions (PDF) and the systematic uncertainties of the energy calibration.

The extracted results of wma are combined and yield a value of wma_comb = 0.2288 +- 0.0004 (stat.) +- 0.0009 (syst.) = 0.2288 +- 0.0010 (tot.). The measurements are compared to the results of previous measurements at the Z boson resonance. The deviation with respect to the combined result provided by the LEP and SLC experiments is up to 2.7 standard deviations.
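The template-based extraction can be illustrated with a short, self-contained sketch. The following Python toy is not the ATLAS analysis: it compares a binned AFB pseudo-measurement to templates generated on a grid of assumed mixing-angle values and locates the chi^2 minimum; the bin sensitivities, uncertainties, and the assumed linear dependence of AFB on the mixing angle are all invented for illustration.

```python
import numpy as np

# Toy model: in each invariant-mass bin, AFB depends approximately linearly
# on the effective weak mixing angle (illustrative parametrization only).
def afb_template(sin2theta, slopes, offsets):
    return offsets + slopes * (sin2theta - 0.2315)

rng = np.random.default_rng(1)
slopes  = np.array([-3.5, -4.0, -2.8, -1.9])   # toy sensitivity per mass bin
offsets = np.array([0.01, 0.05, 0.10, 0.15])   # toy AFB at the reference point

true_sin2theta = 0.2315
afb_err  = np.full(4, 0.004)                   # toy statistical errors
afb_data = afb_template(true_sin2theta, slopes, offsets) + rng.normal(0.0, afb_err)

# Scan templates over a grid of mixing-angle hypotheses and compute chi^2.
grid = np.linspace(0.225, 0.238, 261)
chi2 = np.array([np.sum(((afb_data - afb_template(s, slopes, offsets)) / afb_err) ** 2)
                 for s in grid])

best = grid[np.argmin(chi2)]
# Approximate 1-sigma interval from the chi2_min + 1 criterion.
inside = grid[chi2 < chi2.min() + 1.0]
print(f"best fit: {best:.5f}  interval: [{inside.min():.5f}, {inside.max():.5f}]")
```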

Relevance:

100.00%

Publisher:

Abstract:

Virtual Compton Scattering (VCS) is an important reaction for understanding nucleon structure at low energies. By studying this process, the generalized polarizabilities of the nucleon can be measured. These observables are a generalization of the already known polarizabilities and allow theoretical models to be challenged on a new level. More specifically, there exist six generalized polarizabilities, and in order to disentangle them all, a double polarization experiment must be performed. Within this work, the VCS reaction p(e,e'p)gamma was measured at MAMI using the A1 Collaboration's three-spectrometer setup at Q^2 = 0.33 (GeV/c)^2. Using the highly polarized MAMI beam and a recoil proton polarimeter, it was possible to measure both the VCS cross section and the double polarization observables. The unpolarized VCS cross section had already been measured at MAMI in 2000. The new experiment confirms those data, and in addition the double polarization observables were measured for the first time. The data were taken in five periods between 2005 and 2006. In this work, the data were analyzed to extract the cross section and the proton polarization. For the analysis, a maximum likelihood algorithm was developed together with a full simulation of all the analysis steps. The experiment is limited by low statistics, due mainly to the efficiency of the focal-plane proton polarimeter. To overcome this problem, a new determination and parameterization of the carbon analyzing power was performed. The main result of the experiment is the extraction of a new combination of the generalized polarizabilities using the double polarization observables.
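As an illustration of the kind of maximum likelihood estimation involved, the Python sketch below fits a single polarization-like parameter from an event-by-event angular distribution of the form (1 + alpha*P*cos(theta))/2, with an assumed analyzing power alpha. This is a generic toy, not the likelihood developed in the thesis; all numbers are invented.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy angular distribution f(c) = (1 + a*c)/2 on c in [-1, 1],
# where a = alpha * P combines an assumed analyzing power and a polarization.
def sample(a, n, rng):
    out = []
    fmax = (1.0 + abs(a)) / 2.0
    while len(out) < n:                      # simple rejection sampling
        c = rng.uniform(-1.0, 1.0, n)
        u = rng.uniform(0.0, fmax, n)
        out.extend(c[u < (1.0 + a * c) / 2.0])
    return np.array(out[:n])

def neg_log_likelihood(a, cos_theta):
    return -np.sum(np.log((1.0 + a * cos_theta) / 2.0))

rng = np.random.default_rng(7)
alpha = 0.25                                 # illustrative analyzing power
p_true = 0.6                                 # toy proton polarization
data = sample(alpha * p_true, 5000, rng)

res = minimize_scalar(neg_log_likelihood, bounds=(-0.99, 0.99),
                      args=(data,), method="bounded")
print("estimated polarization:", res.x / alpha)
```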

Relevance:

100.00%

Publisher:

Abstract:

In this thesis, the phenomenology of the Randall-Sundrum setup is investigated. In this context models with and without an enlarged SU(2)_L x SU(2)_R x U(1)_X x P_{LR} gauge symmetry, which removes corrections to the T parameter and to the Z b_L \bar b_L coupling, are compared with each other. The Kaluza-Klein decomposition is formulated within the mass basis, which allows for a clear understanding of various model-specific features. A complete discussion of tree-level flavor-changing effects is presented. Exact expressions for five dimensional propagators are derived, including Yukawa interactions that mediate flavor-off-diagonal transitions. The symmetry that reduces the corrections to the left-handed Z b \bar b coupling is analyzed in detail. In the literature, Randall-Sundrum models have been used to address the measured anomaly in the t \bar t forward-backward asymmetry. However, it will be shown that this is not possible within a natural approach to flavor. The rare decays t \to cZ and t \to ch are investigated, where in particular the latter could be observed at the LHC. A calculation of \Gamma_{12}^{B_s} in the presence of new physics is presented. It is shown that the Randall-Sundrum setup allows for an improved agreement with measurements of A_{SL}^s, S_{\psi\phi}, and \Delta\Gamma_s. For the first time, a complete one-loop calculation of all relevant Higgs-boson production and decay channels in the custodial Randall-Sundrum setup is performed, revealing a sensitivity to large new-physics scales at the LHC.

Relevance:

100.00%

Publisher:

Abstract:

This thesis is on the flavor problem of Randall-Sundrum models and their strongly coupled dual theories. These models are particularly well motivated extensions of the Standard Model, because they simultaneously address the gauge hierarchy problem and the hierarchies in the quark masses and mixings. In order to put this into context, special attention is given to the concepts underlying theories which can explain the hierarchy problem and the flavor structure of the Standard Model (SM). The AdS/CFT duality is introduced and its implications for the Randall-Sundrum model with fermions in the bulk and general bulk gauge groups are investigated. It is shown that the different terms in the general 5D propagator of a bulk gauge field can be related to the corresponding diagrams of the strongly coupled dual, which allows for a deeper understanding of the origin of flavor changing neutral currents generated by the exchange of the Kaluza-Klein excitations of these bulk fields.

In the numerical analysis, different observables which are sensitive to corrections from the tree-level exchange of these resonances are presented on the basis of updated experimental data from the Tevatron and LHC experiments. This includes electroweak precision observables, namely corrections to the S and T parameters followed by corrections to the Zbb vertex; flavor changing observables with flavor changes at one vertex, viz. BR(Bd -> mu+ mu-) and BR(Bs -> mu+ mu-), and at two vertices, viz. S_psiphi and |eps_K|; as well as bounds from direct detection experiments. The analysis shows that all of these bounds can be brought into agreement with a new physics scale Lambda_NP in the TeV range, except for the CP-violating quantity |eps_K|, which requires Lambda_NP = O(10) TeV in the absence of fine-tuning. The numerous modifications of the Randall-Sundrum model in the literature which try to attenuate this bound are reviewed and categorized.

Subsequently, a novel solution to this flavor problem, based on an extended color gauge group in the bulk and its thorough implementation in the RS model, is presented, as well as an analysis of the observables mentioned above in the extended model. This solution is especially motivated from the point of view of the strongly coupled dual theory, and the implications for strongly coupled models of new physics which do not possess a holographic dual are examined. Finally, the top quark plays a special role in models with a geometric explanation of flavor hierarchies, and the predictions in the Randall-Sundrum model with and without the proposed extension for the forward-backward asymmetry A_FB^t in top pair production are computed.

Relevance:

100.00%

Publisher:

Abstract:

Based on a ground truth database covering three growing seasons, the information content of multitemporal ERS-1/-2 Synthetic Aperture Radar (SAR) data is evaluated for recording the species inventories and the condition of agriculturally used soils and vegetation in agricultural regions of Bavaria. For this purpose, a multitemporal, per-field (parcel-based) classification procedure adapted to radar data is developed, based on image-statistical parameters of the ERS time series. As supervised classification methods, a maximum likelihood classifier and a neural backpropagation network are used and compared. The overall accuracies based on the radar image channels vary between 75 and 85%. It is further shown that the interferometric coherence and the combination with image channels of optical sensors (Landsat-TM, SPOT-PAN and IRS-1C-PAN) improve the classification. Likewise, the classification results can be improved by a prior coarse segmentation of the study area into naturally homogeneous spatial units. Beyond the land-use classification, further bio- and soil-physical parameters are derived from the SAR data using regression models. The focus is on the near-surface soil moisture of bare or sparsely vegetated areas and on the biomass of agricultural crops. The results show that soil moisture can be measured with ERS-1/-2 SAR data if information on surface roughness is available. With respect to the biophysical parameters, significant relationships between the fresh or dry mass of the canopy of various cereal crops and the radar signal can be demonstrated. The biomass information can be used to correct growth models and help to increase the accuracy of yield estimates.
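The supervised maximum likelihood step can be sketched as a per-pixel Gaussian classifier on multitemporal backscatter vectors. The Python toy below is only an illustration of that classifier, not the per-field procedure or the neural network comparison from the thesis; the class count, band count, and training data are invented.

```python
import numpy as np

def fit_gaussian_classes(X_train, y_train):
    """Estimate mean vector and covariance matrix for each class label."""
    params = {}
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        params[c] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False))
    return params

def classify_ml(X, params):
    """Assign each feature vector to the class with the largest Gaussian log-likelihood."""
    labels = sorted(params)
    scores = []
    for c in labels:
        mu, cov = params[c]
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        d = X - mu
        scores.append(-0.5 * (np.einsum("ij,jk,ik->i", d, inv, d) + logdet))
    return np.array(labels)[np.argmax(np.stack(scores), axis=0)]

# Toy example: 6-date backscatter vectors for three invented crop classes.
rng = np.random.default_rng(0)
means = rng.normal(-10.0, 3.0, size=(3, 6))
X_train = np.vstack([rng.normal(m, 1.0, size=(200, 6)) for m in means])
y_train = np.repeat([0, 1, 2], 200)

params = fit_gaussian_classes(X_train, y_train)
X_test = rng.normal(means[1], 1.0, size=(10, 6))
print(classify_ml(X_test, params))
```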

Relevance:

100.00%

Publisher:

Abstract:

The phylogeny of the Western Palaearctic long-eared bats (Mammalia, Chiroptera, Plecotus) - a molecular analysis. The long-eared bats are a genus of bats that colonizes almost all Western Palaearctic habitats up to the Arctic Circle and is enigmatic in many respects. Numerous forms and varieties have been described in the past; nevertheless, for a long time only two species were recognized in Europe. Further species were known from North Africa, the Canary Islands and Asia, but their species status was also frequently questioned. In this dissertation, I used molecular data, namely partial sequences of the mitochondrial 16S rRNA and ND1 genes as well as the mitochondrial control region, to carry out a molecular analysis of the phylogenetic relationships within and between the lineages of the Western Palaearctic long-eared bats. The best-fitting substitution models were determined and phylogenetic trees were constructed with four different methods: neighbor joining (NJ), maximum likelihood (ML), maximum parsimony (MP), and Bayesian inference. Six lineages of long-eared bats are genetically differentiated at the species level: Plecotus auritus, P. austriacus, P. balensis, P. christii, P. sardus, and P. macrobullaris. For the species P. teneriffae, P. kolombatovici and P. begognae, the interpretation of genetic data from single mitochondrial genes alone is not sufficient to determine their taxonomic rank. In this dissertation I describe three new taxa: Plecotus sardus, P. kolombatovici gaisleri (= Plecotus teneriffae gaisleri, Benda et al. 2004) and P. macrobullaris alpinus (= Plecotus alpinus, Kiefer & Veith 2002). Morphological characters, in particular for identification in the field, are presented. Three of the seven species are polytypic: P. auritus (a western and an eastern European lineage, a Sardinian lineage, and a recently discovered Caucasian lineage), Plecotus kolombatovici (P. k. kolombatovici, P. k. gaisleri and P. k. ssp.), and P. macrobullaris (P. m. macrobullaris and P. m. alpinus). The distribution ranges of most species are presented here for the first time exclusively on the basis of genetically assigned animals. Investigating the ecological niches of the now recognized forms, especially in areas of sympatric occurrence, offers an exciting and rewarding field for future research. Not least, the discovery of a considerable amount of cryptic diversity within the Western Palaearctic long-eared bats must also be reflected in the development of specific species conservation concepts.
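As a small illustration of the distance-based part of such an analysis, the Python sketch below computes Jukes-Cantor (JC69) corrected pairwise distances from an aligned set of toy sequences; such a matrix would be the input to a neighbor-joining reconstruction. The sequences are invented placeholders, not the 16S rRNA or ND1 data of the thesis, and the substitution-model selection and the ML, MP, and Bayesian analyses are not reproduced here.

```python
import numpy as np

def jc69_distance(seq_a, seq_b):
    """Jukes-Cantor corrected distance between two aligned sequences."""
    pairs = [(a, b) for a, b in zip(seq_a, seq_b) if a in "ACGT" and b in "ACGT"]
    p = sum(a != b for a, b in pairs) / len(pairs)   # proportion of differing sites
    if p >= 0.75:                                    # correction undefined beyond this point
        return float("inf")
    return -0.75 * np.log(1.0 - 4.0 * p / 3.0)

# Invented toy alignment (fragments standing in for 16S rRNA / ND1 sequences).
alignment = {
    "auritus":       "ACGTACGTACGTACGTACGT",
    "austriacus":    "ACGTACGTTCGTACGAACGT",
    "macrobullaris": "ACGAACGTTCGTACGAACGC",
}

taxa = list(alignment)
dist = np.zeros((len(taxa), len(taxa)))
for i, ti in enumerate(taxa):
    for j, tj in enumerate(taxa):
        if i < j:
            dist[i, j] = dist[j, i] = jc69_distance(alignment[ti], alignment[tj])

print(taxa)
print(np.round(dist, 4))
```

In practice, the model selection and the likelihood-based and Bayesian tree searches described in the abstract would be carried out with dedicated phylogenetics software rather than a hand-rolled script like this.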

Relevance:

100.00%

Publisher:

Abstract:

This work is motivated by biological questions concerning the behavior of membrane potentials in neurons. A widely studied model for spiking neurons is the following. Between spikes, the membrane potential behaves like a diffusion process X given by the SDE dX_t = beta(X_t) dt + sigma(X_t) dB_t, where (B_t) denotes a standard Brownian motion. Spikes are explained as follows: as soon as the potential X exceeds a certain excitation threshold S, a spike occurs, after which the potential is reset to a fixed value x_0. In applications it is sometimes possible to observe the diffusion process X between spikes and to estimate the coefficients beta() and sigma() of the SDE. Nevertheless, the thresholds x_0 and S must be determined to fully specify the model. One way to approach this problem is to treat x_0 and S as parameters of a statistical model and to estimate them. In this work, four different cases are discussed, in which the membrane potential X between spikes is assumed to be a Brownian motion with drift, a geometric Brownian motion, an Ornstein-Uhlenbeck process, or a Cox-Ingersoll-Ross process. In addition, we observe the times between consecutive spikes, which we regard as iid hitting times of the threshold S by X started at x_0. The first two cases are very similar, and in each the maximum likelihood estimator can be given explicitly; moreover, using LAN theory, the optimality of these estimators is shown. In the OU and CIR cases, we choose a minimum-distance method based on comparing the empirical and the true Laplace transforms with respect to a Hilbert space norm. We prove that all estimators are strongly consistent and asymptotically normally distributed. In the last chapter, the efficiency of the minimum-distance estimators is checked on simulated data. Furthermore, applications to real data sets and their results are discussed in detail.
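For the Brownian-motion-with-drift case, the estimation problem admits an explicit illustration. Assuming X_t = x_0 + mu*t + sigma*B_t with mu > 0 and threshold S > x_0, the hitting time of S is inverse Gaussian with mean (S - x_0)/mu and shape (S - x_0)^2/sigma^2, so only the barrier distance a = S - x_0 enters the likelihood; for known mu and sigma, setting the derivative of the log-likelihood to zero gives a closed-form estimate of a as the positive root of a quadratic. The Python sketch below demonstrates this simplified setting and is not the estimator, or the optimality analysis, from the thesis.

```python
import numpy as np

# Toy parameters: drift and diffusion assumed known from the observed path.
mu, sigma = 1.5, 0.8
a_true = 2.0                    # barrier distance S - x_0 to be estimated
n = 2000

# Hitting times of a Brownian motion with drift are inverse Gaussian
# with mean a/mu and shape a^2/sigma^2 (numpy calls this the Wald distribution).
rng = np.random.default_rng(3)
times = rng.wald(a_true / mu, a_true**2 / sigma**2, size=n)

# Closed-form MLE for a with mu, sigma known:
# maximizing n*log(a) - sum((a - mu*t)^2 / (2*sigma^2*t)) over a gives the
# positive root of  a^2 * sum(1/t) - n*mu*a - n*sigma^2 = 0.
s_inv = np.sum(1.0 / times)
a_hat = (n * mu + np.sqrt(n**2 * mu**2 + 4.0 * n * sigma**2 * s_inv)) / (2.0 * s_inv)

print(f"true barrier distance: {a_true}, estimate: {a_hat:.4f}")
```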

Relevance:

100.00%

Publisher:

Abstract:

In this thesis, we study the phenomenology of selected observables in the context of the Randall-Sundrum scenario of a compactified warped extra dimension. Gauge and matter fields are assumed to live in the whole five-dimensional space-time, while the Higgs sector is localized on the infrared boundary. An effective four-dimensional description is obtained via Kaluza-Klein decomposition of the five-dimensional quantum fields. The symmetry-breaking effects due to the Higgs sector are treated exactly, and the decomposition of the theory is performed in a covariant way. We develop a formalism which allows for a straightforward generalization to scenarios with an extended gauge group compared to the Standard Model of elementary particle physics. As an application, we study the so-called custodial Randall-Sundrum model and compare the results to those of the original formulation. We present predictions for electroweak precision observables, the Higgs production cross section at the LHC, the forward-backward asymmetry in top-antitop production at the Tevatron, as well as the width difference, the CP-violating phase, and the semileptonic CP asymmetry in B_s decays.

Relevance:

100.00%

Publisher:

Abstract:

In this thesis we investigate several phenomenologically important properties of top-quark pair production at hadron colliders. We calculate double differential cross sections in two different kinematical setups, pair invariant-mass (PIM) and single-particle inclusive (1PI) kinematics. In pair invariant-mass kinematics we present results for the double differential cross section with respect to the invariant mass of the top-quark pair and the top-quark scattering angle. Working in the threshold region, where the pair invariant mass M is close to the partonic center-of-mass energy sqrt{hat{s}}, we are able to factorize the partonic cross section into different energy regions. We use renormalization-group (RG) methods to resum large threshold logarithms to next-to-next-to-leading-logarithmic (NNLL) accuracy. On a technical level this is done using effective field theories, such as heavy-quark effective theory (HQET) and soft-collinear effective theory (SCET). The same techniques are applied when working in 1PI kinematics, leading to a calculation of the double differential cross section with respect to the transverse momentum pT and the rapidity of the top quark. We restrict the phase space such that only soft emission of gluons is possible, and perform a NNLL resummation of threshold logarithms. The obtained analytical expressions enable us to precisely predict several observables, and a substantial part of this thesis is devoted to their detailed phenomenological analysis. Matching our results in the threshold regions to the exact ones at next-to-leading order (NLO) in fixed-order perturbation theory allows us to make predictions at NLO+NNLL order in RG-improved perturbation theory, and at approximate next-to-next-to-leading order (NNLO) in fixed-order perturbation theory. We give numerical results for the invariant mass distribution of the top-quark pair, and for the top-quark transverse-momentum and rapidity spectra. We predict the total cross section separately for both kinematics. Using these results, we analyze subleading contributions to the total cross section in 1PI and PIM originating from power corrections to the leading terms in the threshold expansions, and compare them to previous approaches. We then combine our PIM and 1PI results for the total cross section, thereby eliminating uncertainties due to these corrections. The combined predictions for the total cross section are presented as a function of the top-quark mass in the pole, minimal-subtraction (MS), and 1S mass schemes. In addition, we calculate the forward-backward (FB) asymmetry at the Tevatron in the laboratory and in the ttbar rest frames as a function of the rapidity and the invariant mass of the top-quark pair at NLO+NNLL. We also give binned results for the asymmetry as a function of the invariant mass and the rapidity difference of the ttbar pair, and compare those to recent measurements. As a final application, we calculate the charge asymmetry at the LHC as a function of a lower rapidity cut-off for the top and anti-top quarks.
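The rapidity-based asymmetry itself is simple to evaluate on an event sample. The Python sketch below is not the NLO+NNLL calculation; it only shows how a binned forward-backward asymmetry A_FB = (N(Delta y > 0) - N(Delta y < 0)) / (N(Delta y > 0) + N(Delta y < 0)) with a binomial-type uncertainty would be computed in bins of the pair invariant mass, using an entirely invented toy sample.

```python
import numpy as np

def binned_afb(delta_y, mtt, bin_edges):
    """Forward-backward asymmetry in bins of the pair invariant mass."""
    results = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        sel = (mtt >= lo) & (mtt < hi)
        n_f = np.count_nonzero(delta_y[sel] > 0)
        n_b = np.count_nonzero(delta_y[sel] < 0)
        afb = (n_f - n_b) / (n_f + n_b)
        err = np.sqrt((1.0 - afb**2) / (n_f + n_b))   # binomial-type uncertainty
        results.append((lo, hi, afb, err))
    return results

# Toy events: invented rapidity difference y_t - y_tbar and pair mass in GeV.
rng = np.random.default_rng(5)
n = 200_000
mtt = rng.uniform(350.0, 800.0, n)
delta_y = rng.normal(0.02 * (mtt - 350.0) / 450.0, 1.0)   # toy mass-dependent shift

for lo, hi, afb, err in binned_afb(delta_y, mtt, [350.0, 450.0, 550.0, 800.0]):
    print(f"{lo:.0f}-{hi:.0f} GeV: AFB = {afb:+.4f} +- {err:.4f}")
```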

Relevance:

100.00%

Publisher:

Abstract:

In the year 2013, the detection of a diffuse astrophysical neutrino flux with the IceCube neutrino telescope, constructed at the geographic South Pole, was announced by the IceCube collaboration. However, the origin of these neutrinos is still unknown, as no sources have been identified to this day. Promising neutrino source candidates are blazars, a subclass of active galactic nuclei with radio jets pointing towards the Earth. In this thesis, the neutrino flux from blazars is tested with a maximum likelihood stacking approach, analyzing the combined emission from uniform groups of objects. The stacking enhances the sensitivity with respect to the so far unsuccessful single-source searches. The analysis utilizes four years of IceCube data, including one year from the completed detector. As all results presented in this work are compatible with background, upper limits on the neutrino flux are given. It is shown that, under certain conditions, some hadronic blazar models can be challenged or even rejected. Moreover, the sensitivity of this analysis, and of any future IceCube point-source search, was enhanced by the development of a new angular reconstruction method. It is based on a detailed simulation of the photon propagation in the Antarctic ice. The median resolution for muon tracks induced by high-energy neutrinos is improved for all neutrino energies above IceCube's lower threshold of 0.1 TeV. By reprocessing the detector data and simulation from the year 2010, it is shown that the new method improves IceCube's discovery potential by 20% to 30%, depending on the declination.
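The stacking idea can be sketched in a strongly simplified, one-dimensional form. In the Python toy below, each event is described by a single sky coordinate, the stacked signal is a weighted sum of Gaussian terms centered on assumed source positions, the background is uniform, and the number of signal events n_s is fitted by maximizing the likelihood. The source list, weights, point-spread width, and PDFs are all invented and do not correspond to the IceCube likelihood.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

# Toy setup: events live on a 1D "sky" [0, 10]; three stacked sources with
# equal weights and a Gaussian point-spread function of width 0.2.
sources = np.array([2.0, 5.0, 7.5])
weights = np.ones_like(sources) / len(sources)
psf_sigma = 0.2
sky_length = 10.0

def signal_pdf(x):
    # Weighted sum of per-source PDFs (the stacked signal hypothesis).
    return np.sum(weights[:, None] * norm.pdf(x[None, :], sources[:, None], psf_sigma), axis=0)

def neg_log_likelihood(n_s, x):
    n = len(x)
    f = (n_s / n) * signal_pdf(x) + (1.0 - n_s / n) / sky_length
    return -np.sum(np.log(f))

# Toy data: 30 signal events spread over the sources plus 970 background events.
rng = np.random.default_rng(11)
sig = rng.normal(rng.choice(sources, 30), psf_sigma)
bkg = rng.uniform(0.0, sky_length, 970)
x = np.concatenate([sig, bkg])

res = minimize_scalar(neg_log_likelihood, bounds=(0.0, 200.0), args=(x,), method="bounded")
ts = 2.0 * (neg_log_likelihood(0.0, x) - neg_log_likelihood(res.x, x))   # test statistic
print(f"fitted n_s = {res.x:.1f}, TS = {ts:.2f}")
```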

Relevance:

100.00%

Publisher:

Abstract:

Satellite image classification involves designing and developing efficient image classifiers. With satellite image data and image analysis methods multiplying rapidly, selecting the right mix of data sources and data analysis approaches has become critical to the generation of quality land-use maps. In this study, a new postprocessing information fusion algorithm for the extraction and representation of land-use information based on high-resolution satellite imagery is presented. This approach can produce land-use maps with sharp interregional boundaries and homogeneous regions. The proposed approach is conducted in five steps. First, a GIS layer (ATKIS data) was used to generate two coarse homogeneous regions, i.e. urban and rural areas. Second, a thematic (class) map was generated by use of a hybrid spectral classifier combining the Gaussian maximum likelihood (GML) algorithm and the ISODATA classifier. Third, a probabilistic relaxation algorithm was applied to the thematic map, resulting in a smoothed thematic map. Fourth, edge detection and edge thinning techniques were used to generate a contour map with pixel-width interclass boundaries. Fifth, the contour map was superimposed on the thematic map by use of a region-growing algorithm with the contour map and the smoothed thematic map as two constraints. To implement the proposed method, a software package was developed in the programming language C. This software package comprises the GML algorithm, a probabilistic relaxation algorithm, the TBL edge detector, an edge thresholding algorithm, a fast parallel thinning algorithm, and a region-growing information fusion algorithm. The county of Landau in the state of Rheinland-Pfalz, Germany, was selected as the test site. High-resolution IRS-1C imagery was used as the principal input data.
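The relaxation step of the pipeline can be illustrated with a strongly simplified variant. The Python sketch below takes per-pixel class probabilities (as a GML classifier could produce) and repeatedly reinforces each pixel's probabilities by the average support of its 4-neighborhood before renormalizing, which smooths the thematic map. The update rule and the toy probability map are illustrative assumptions, not the probabilistic relaxation algorithm implemented in the described C software package.

```python
import numpy as np

def relax(prob, iterations=5):
    """Simplified probabilistic relaxation: prob has shape (rows, cols, classes)."""
    p = prob.copy()
    for _ in range(iterations):
        # Average class support from the 4-neighborhood (edges padded by replication).
        padded = np.pad(p, ((1, 1), (1, 1), (0, 0)), mode="edge")
        support = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                   padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        p = p * support                          # reinforce locally consistent labels
        p /= p.sum(axis=2, keepdims=True)        # renormalize to probabilities
    return p

# Toy probability map: two classes, a noisy square of class 1 on a class-0 background.
rng = np.random.default_rng(2)
rows, cols = 60, 60
p1 = np.clip(rng.normal(0.3, 0.2, (rows, cols)), 0.01, 0.99)
p1[20:40, 20:40] = np.clip(rng.normal(0.7, 0.2, (20, 20)), 0.01, 0.99)
prob = np.dstack([1.0 - p1, p1])

smoothed = relax(prob)
thematic = np.argmax(smoothed, axis=2)           # smoothed thematic (class) map
print("class-1 pixels before/after:", int((p1 > 0.5).sum()), int(thematic.sum()))
```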