939 results for gvSIG extensions


Relevance:

10.00%

Publisher:

Abstract:

The aim of this work was to investigate the fault distribution and fault kinematics associated with the uplift of the rift shoulders of the Rwenzori Mountains. The Rwenzori Mountains are located in the NNE-SSW to N-S trending Albertine Rift, the northernmost segment of the western branch of the East African Rift System. The Albertine Rift consists of basins of differing elevation that contain Lake Albert, Lake Edward, Lake George and Lake Kivu. The Rwenzori horst separates the basins of Lake Albert and Lake Edward. It extends 120 km in the N-S direction and 40-50 km in the E-W direction; its highest point lies at 5111 m above sea level. This study examines a section of the rift between about 1°N and 0°30'S latitude and between 29°30' and 30°30' east longitude, on which the field work was also concentrated.

The main purpose of this study was to test the validity of the following thesis: 'If the fault kinematics indeed changed substantially over time, then the strong uplift of the rift flanks in the Rwenzori area cannot be explained simply by movement along the main rift faults. Rather, it is the result of the interplay of several tectonic processes that influence the stress field and thereby cause changes in the kinematics.' The study therefore concentrated primarily on fault analysis.

Knowledge of regional changes in the extension direction is crucial for understanding complex rift systems such as the East African Rift. The core of the investigation therefore consisted of mapping faults and studying their kinematics. The acquisition of structural data concentrated on the Ugandan side of the rift, and paleostresses were reconstructed from fault-slip data by stress inversion. The differing orientations of brittle structures in the field, the geometric analysis of the geological structures, and the results from microstructures in thin section (Chapter 4) point to different stress fields and thus to possible changes in the extension direction. The results of the stress inversion indicate normal, thrust and strike-slip faulting as well as oblique thrusting (Chapter 5). Two different extension directions emerge from the orientation of the normal faults: essentially NW-SE extension in almost all areas, and NNE-SSW extension in the eastern central part. The analysis of strike-slip faults yielded three different stress states. First, NNW-SSE to N-S compression combined with ENE-WSW to E-W extension was identified for the northern and central Rwenzoris. A second stress state with WNW-ESE compression and NNE-SSW extension affected the central Rwenzoris. A third stress state with NNW-SSE extension affected the eastern central part of the Rwenzoris. Oblique thrusts are characterized by obliquely oriented axes indicating N-S to NNW-SSE compression and occur exclusively in the eastern central section. Thrust faults, which occur mainly in the central and eastern Rwenzoris, indicate NE-SW oriented σ2 axes and NW-SE extension.

Three distinct stress regimes could be identified: the collision-related formation of a thrust system was followed by intra-cratonic compression and finally by extension-controlled rifting. The transition between the latter two stress states occurred gradually and presumably produced locally confined transpression and transtension. At present, the fault kinematics of the region is governed by a tensile stress regime oriented NW-SE to N-S.

Local stress variations are caused mainly by the interference of the regional stress field with major local faults. Further factors that can lead to local changes of the stress field are differing uplift rates, block rotation, or the interaction of rift segments. To determine the influence of pre-existing structures and other boundary conditions on the uplift of the Rwenzoris, the rifting process was reconstructed with an analogue 'sandbox' model (Chapter 6). Since the Moho discontinuity lies at a depth of 25 km in the study area, while active faults can only be observed down to a depth of about 20 km (Koehn et al. 2008), only the upper 25 km were reproduced in the model. Both the order in which rift segments form and the patterns that develop during the nucleation and growth of these rift segments were examined and compared with field observations. The main focus was placed on the development of the two sub-segments hosting Lake Albert and Lake Edward/Lake George, respectively, and on the Rwenzori Mountains located between them. The aim of the investigation was to find out how the southward-propagating Lake Albert sub-segment interacts with the sinistrally offset, northward-propagating Lake Edward/Lake George sub-segment.

Of particular interest was how the structures inside and outside the Rwenzoris were influenced by the interaction of these rift segments.

Three experimental series with different boundary conditions were compared. Depending on the dominant deformation type of the transfer zone, the series were characterized as 'shear-dominated', 'extension-dominated' and 'rotation-dominated'. Combining map views of the model with cross-sections made it possible to observe the three-dimensional structural evolution of the rift segments. Of the three series, the 'rotation-dominated' one developed a rhomb-shaped block in the transfer zone between the two rift segments that rotated clockwise by 5-20°. This angle is in the range of the inferred rotation angle of the Rwenzori block (5°). In summary, the sandbox experiments investigate the influence of pre-existing structures and of the overlap or intersection of two interacting rift segments on the evolution of the rift system. They further address the question of how block formation and block rotation influence the local stress field.
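Paleostress inversion of fault-slip data, as used above, rests on the Wallace-Bott assumption that slip on a fault plane is parallel to the shear stress resolved on that plane. The Python sketch below only illustrates this forward relation, not the inversion code used in the thesis; the stress tensor and fault orientation are invented for illustration.

```python
import numpy as np

# Invented reduced stress tensor: principal stresses along the coordinate axes,
# sigma1 > sigma2 > sigma3 (arbitrary illustrative magnitudes).
stress = np.diag([1.0, 0.6, 0.2])

def resolved_slip_direction(normal, sigma):
    """Unit direction of the shear traction on a plane with unit normal `normal`.
    Under the Wallace-Bott assumption this is the expected slip direction."""
    n = normal / np.linalg.norm(normal)
    traction = sigma @ n                     # total traction acting on the plane
    shear = traction - (traction @ n) * n    # remove the component normal to the plane
    return shear / np.linalg.norm(shear)

# Normal of a plane dipping 60 degrees (illustrative orientation)
dip = np.radians(60.0)
normal = np.array([0.0, np.sin(dip), np.cos(dip)])
print(resolved_slip_direction(normal, stress))
```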

Relevance:

10.00%

Publisher:

Abstract:

Oligodendrocytes form specialized plasma membrane extensions which spirally enwrap axons, thereby building up the myelin sheath. During myelination, oligodendrocytes produce large amounts of membrane components. Oligodendrocytes can be seen as a complex polarized cell type with two distinct membrane domains, the plasma membrane surrounding the cell body and the myelin membrane. SNARE proteins mediate the fusion of vesicular cargoes with their target membrane. We propose a model in which the major myelin protein PLP is transported by two different pathways: VAMP3 mediates the non-polarized transport of newly synthesized PLP via recycling endosomes to the plasma membrane, while transport of PLP from late endosomes/lysosomes to myelin is controlled by VAMP7. In the second part of the thesis, the role of exosome secretion in glia-to-axon signaling was studied. Further studies are required to clarify whether VAMP7 also controls exosome secretion. The thesis further focused on putative metabolic effects in the target neurons. Oligodendroglial exosomes showed no obvious influence on neuronal metabolic activity. Analysis of the phosphorylation levels of the neurofilament heavy subunit revealed a decrease in the presence of oligodendrocytes, indicating effects of oligodendroglial exosomes on the neuronal cytoskeleton. Finally, candidate kinases which are possibly activated in response to oligodendroglial exosomes and could influence neuronal survival were identified.

Relevance:

10.00%

Publisher:

Abstract:

This thesis is on the flavor problem of Randall-Sundrum models and their strongly coupled dual theories. These models are particularly well motivated extensions of the Standard Model, because they simultaneously address the gauge hierarchy problem and the hierarchies in the quark masses and mixings. In order to put this into context, special attention is given to the concepts underlying theories which can explain the hierarchy problem and the flavor structure of the Standard Model (SM). The AdS/CFT duality is introduced and its implications for the Randall-Sundrum model with fermions in the bulk and general bulk gauge groups are investigated. It will be shown that the different terms in the general 5D propagator of a bulk gauge field can be related to the corresponding diagrams of the strongly coupled dual, which allows for a deeper understanding of the origin of flavor changing neutral currents generated by the exchange of the Kaluza-Klein excitations of these bulk fields. In the numerical analysis, different observables which are sensitive to corrections from the tree-level exchange of these resonances are presented on the basis of updated experimental data from the Tevatron and LHC experiments. This includes electroweak precision observables, namely corrections to the S and T parameters followed by corrections to the Zbb vertex, flavor changing observables with flavor changes at one vertex, viz. BR(Bd -> mu+mu-) and BR(Bs -> mu+mu-), and at two vertices, viz. S_psiphi and |eps_K|, as well as bounds from direct detection experiments. The analysis shows that all of these bounds can be brought into agreement with a new physics scale Lambda_NP in the TeV range, except for the CP-violating quantity |eps_K|, which requires Lambda_NP of order 10 TeV in the absence of fine-tuning. The numerous modifications of the Randall-Sundrum model in the literature which try to attenuate this bound are reviewed and categorized.

Subsequently, a novel solution to this flavor problem, based on an extended color gauge group in the bulk, and its thorough implementation in the RS model are presented, as well as an analysis of the observables mentioned above in the extended model. This solution is especially motivated from the point of view of the strongly coupled dual theory, and the implications for strongly coupled models of new physics which do not possess a holographic dual are examined. Finally, the top quark plays a special role in models with a geometric explanation of flavor hierarchies, and the predictions in the Randall-Sundrum model with and without the proposed extension for the forward-backward asymmetry A_FB^t in top pair production are computed.

Relevance:

10.00%

Publisher:

Abstract:

The idea of matching the resources spent in the acquisition and encoding of natural signals to their intrinsic information content has driven nearly a decade of research under the name of compressed sensing. In this doctoral dissertation we develop some extensions and improvements upon this technique's foundations, by modifying the random sensing matrices onto which the signals of interest are projected in order to achieve different objectives. Firstly, we propose two methods for the adaptation of sensing matrix ensembles to the second-order moments of natural signals. These techniques leverage the maximisation of different proxies for the quantity of information acquired by compressed sensing, and are efficiently applied to the encoding of electrocardiographic tracks with minimum-complexity digital hardware. Secondly, we focus on the possibility of using compressed sensing as a method to provide a partial, yet cryptanalysis-resistant form of encryption; in this context, we show how a random matrix generation strategy with a controlled amount of perturbations can be used to distinguish between multiple user classes with different quality of access to the encrypted information content. Finally, we explore the application of compressed sensing to the design of a multispectral imager, by implementing an optical scheme that entails a coded aperture array and Fabry-Pérot spectral filters. The signal recoveries obtained by processing real-world measurements show promising results that leave room for improvement in the calibration of the sensing matrix of the devised imager.
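As a minimal illustration of the compressed sensing principle discussed above (not the code developed in the dissertation), the Python sketch below projects a sparse signal onto a random Gaussian sensing matrix and recovers it with a simple orthogonal matching pursuit; the dimensions and sparsity level are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                      # signal length, number of measurements, sparsity

# k-sparse test signal and random Gaussian sensing matrix
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x                               # compressed measurements

# Orthogonal matching pursuit: greedily grow the support, re-fit by least squares
support, residual = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(Phi.T @ residual))))
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    residual = y - Phi[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
print("relative recovery error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```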

Relevance:

10.00%

Publisher:

Abstract:

The Standard Model of particle physics was developed to describe the fundamental particles, which form matter, and their interactions via the strong, electromagnetic and weak force. Although most measurements are described with high accuracy, some observations indicate that the Standard Model is incomplete. Numerous extensions have been developed to address these limitations. Several of these extensions predict heavy resonances, so-called Z' bosons, that can decay into an electron-positron pair. The particle accelerator Large Hadron Collider (LHC) at CERN in Switzerland was built to collide protons at unprecedented center-of-mass energies, namely 7 TeV in 2011. With the data set recorded in 2011 by the ATLAS detector, a large multi-purpose detector located at the LHC, the electron-positron pair mass spectrum was measured up to high masses in the TeV range. The properties of electrons and the probability that other particles are mis-identified as electrons were studied in detail. Using the obtained information, a sophisticated Standard Model expectation was derived with data-driven methods and Monte Carlo simulations. In the comparison of the measurement with the expectation, no significant deviations from the Standard Model were observed. Therefore, exclusion limits for several Standard Model extensions were calculated. For example, Sequential Standard Model (SSM) Z' bosons with masses below 2.10 TeV were excluded at 95% Confidence Level (C.L.).
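Exclusion limits of the kind quoted above are often introduced with a single-bin counting experiment: given an expected background b and n observed events, the 95% C.L. upper limit on the signal yield s is the value at which the Poisson probability of observing at most n events falls to 5%. The sketch below is a generic textbook calculation with invented numbers, not the ATLAS statistical procedure.

```python
from scipy.stats import poisson
from scipy.optimize import brentq

def upper_limit(n_obs, background, cl=0.95):
    """Signal yield s at which P(N <= n_obs | s + b) = 1 - cl (simple counting experiment)."""
    f = lambda s: poisson.cdf(n_obs, s + background) - (1.0 - cl)
    return brentq(f, 0.0, 100.0)

# Invented example: 3 events observed on an expected background of 1.2
s_up = upper_limit(n_obs=3, background=1.2)
print(f"95% C.L. upper limit on the signal yield: {s_up:.2f} events")
```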

Relevance:

10.00%

Publisher:

Abstract:

Data deduplication describes a class of approaches that reduce the storage capacity needed to store data or the amount of data that has to be transferred over a network. These approaches detect coarse-grained redundancies within a data set, e.g. a file system, and remove them.

One of the most important applications of data deduplication is backup storage, where these approaches are able to reduce the storage requirements to a small fraction of the logical backup data size. This thesis introduces multiple new extensions of so-called fingerprinting-based data deduplication. It starts with the presentation of a novel system design which allows using a cluster of servers to perform exact data deduplication with small chunks in a scalable way.

Afterwards, a combination of compression approaches for an important, but often overlooked, data structure in data deduplication systems, so-called block and file recipes, is introduced. Using these compression approaches, which exploit unique properties of data deduplication systems, the size of these recipes can be reduced by more than 92% in all investigated data sets. As file recipes can occupy a significant fraction of the overall storage capacity of data deduplication systems, the compression enables significant savings.

A technique to increase the write throughput of data deduplication systems, based on the aforementioned block and file recipes, is introduced next. The novel Block Locality Caching (BLC) uses properties of block and file recipes to overcome the chunk lookup disk bottleneck of data deduplication systems, which limits either their scalability or their throughput. The presented BLC overcomes the disk bottleneck more efficiently than existing approaches. Furthermore, it is shown to be less prone to aging effects.

Finally, it is investigated whether large HPC storage systems exhibit redundancies that can be found by fingerprinting-based data deduplication. Over 3 PB of HPC storage data from different data sets have been analyzed. In most data sets, between 20 and 30% of the data can be classified as redundant. According to these results, future work should further investigate how data deduplication can be integrated into future HPC storage systems.

This thesis presents important novel work in different areas of data deduplication research.
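Fingerprinting-based deduplication of the kind extended in this thesis splits data into chunks, hashes each chunk, and stores a chunk only if its fingerprint has not been seen before. The Python sketch below is a naive fixed-size-chunking illustration of that idea, not the thesis system; real deduplication systems typically use content-defined chunking and an on-disk index, and the chunk size and file name here are arbitrary.

```python
import hashlib

CHUNK_SIZE = 8 * 1024          # illustrative fixed chunk size

def deduplicate(path):
    """Return (logical_bytes, stored_bytes) for a naive fixed-size, in-memory dedup."""
    index = {}                  # fingerprint -> chunk (stands in for the chunk store)
    logical = stored = 0
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            logical += len(chunk)
            fp = hashlib.sha256(chunk).hexdigest()   # chunk fingerprint
            if fp not in index:                      # only previously unseen chunks are stored
                index[fp] = chunk
                stored += len(chunk)
    return logical, stored

# Hypothetical usage:
# logical, stored = deduplicate("backup.img")
# print(f"dedup ratio: {logical / stored:.2f}")
```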

Relevance:

10.00%

Publisher:

Abstract:

A permanent electric dipole moment of the neutron violates time reversal as well as parity symmetry. Thus it also violates the combination of charge conjugation and parity symmetry if the combination of all three symmetries is a symmetry of nature. The violation of these symmetries could help to explain the observed baryon content of the Universe. The prediction of the Standard Model of particle physics for the neutron electric dipole moment is only about 10^-32 e cm. At the same time, the combined violation of charge conjugation and parity symmetry in the Standard Model is insufficient to explain the observed baryon asymmetry of the Universe. Several extensions to the Standard Model can explain the observed baryon asymmetry and also predict values for the neutron electric dipole moment just below the current best experimental limit of |d_n| < 2.9 x 10^-26 e cm (90% C.L.) that was obtained by the Sussex-RAL-ILL collaboration in 2006. The very same experiment that set the current best limit on the electric dipole moment has been upgraded and moved to the Paul Scherrer Institute. Now an international collaboration is aiming at increasing the sensitivity to an electric dipole moment by more than an order of magnitude. This thesis took place within the framework of this experiment and accompanied its commissioning up to the first data taking. After a short outline of the theoretical background in Chapter 1, the experiment with all subsystems and their performance is described in detail in Chapter 2. To reach the goal sensitivity, the control of systematic errors is as important as an increase in statistical sensitivity. Known systematic effects are described and evaluated in Chapter 3. During about ten days in 2012, a first set of data was measured with the experiment at the Paul Scherrer Institute. An analysis of this data is presented in Chapter 4, together with general tools developed for future analysis efforts. The result for the upper limit of an electric dipole moment of the neutron is |d_n| ≤ 6.4 x 10^-25 e cm (95% C.L.). Chapter 5 presents investigations for a next-generation experiment, to build electrodes made partly from insulating material. Among other advantages, such electrodes would reduce magnetic noise generated by the thermal movement of charge carriers. The last chapter summarizes this work and gives an outlook.

Relevance:

10.00%

Publisher:

Abstract:

Data sets describing the state of the earth's atmosphere are of great importance in the atmospheric sciences. Over the last decades, the quality and sheer amount of the available data increased significantly, resulting in a rising demand for new tools capable of handling and analysing these large, multidimensional sets of atmospheric data. The interdisciplinary work presented in this thesis covers the development and the application of practical software tools and efficient algorithms from the field of computer science, aiming at the goal of enabling atmospheric scientists to analyse and to gain new insights from these large data sets. For this purpose, our tools combine novel techniques with well-established methods from different areas such as scientific visualization and data segmentation. In this thesis, three practical tools are presented. Two of these tools are software systems (Insight and IWAL) for different types of processing and interactive visualization of data; the third tool is an efficient algorithm for data segmentation implemented as part of Insight. Insight is a toolkit for the interactive, three-dimensional visualization and processing of large sets of atmospheric data, originally developed as a testing environment for the novel segmentation algorithm. It provides a dynamic system for combining at runtime data from different sources, a variety of different data processing algorithms, and several visualization techniques. Its modular architecture and flexible scripting support led to additional applications of the software, from which two examples are presented: the usage of Insight as a WMS (web map service) server, and the automatic production of a sequence of images for the visualization of cyclone simulations. The core application of Insight is the provision of the novel segmentation algorithm for the efficient detection and tracking of 3D features in large sets of atmospheric data, as well as for the precise localization of the occurring genesis, lysis, merging and splitting events. Data segmentation usually leads to a significant reduction of the size of the considered data. This enables a practical visualization of the data, statistical analyses of the features and their events, and the manual or automatic detection of interesting situations for subsequent detailed investigation. The concepts of the novel algorithm, its technical realization, and several extensions for avoiding under- and over-segmentation are discussed. As example applications, this thesis covers the setup and the results of the segmentation of upper-tropospheric jet streams and cyclones as full 3D objects. Finally, IWAL is presented, a web application that provides easy interactive access to meteorological data visualizations, primarily aimed at students. As a web application, it avoids the need to retrieve all input data sets and to install and handle complex visualization tools on a local machine. The main challenge in the provision of customizable visualizations to large numbers of simultaneous users was to find an acceptable trade-off between the available visualization options and the performance of the application. Besides the implementational details, benchmarks and the results of a user survey are presented.
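Feature segmentation of the kind provided by Insight can be illustrated with a thresholding plus connected-component step on a 3D array. The sketch below uses scipy.ndimage as a stand-in and is not the thesis algorithm (which adds logic to avoid under- and over-segmentation); the toy field and threshold are invented.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
field = ndimage.gaussian_filter(rng.standard_normal((40, 60, 80)), sigma=4)  # toy 3D field

threshold = field.mean() + 2 * field.std()      # invented threshold for "strong" features
mask = field > threshold

# Connected-component labelling: each 3D feature gets its own integer label
labels, n_features = ndimage.label(mask)
sizes = ndimage.sum(mask, labels, index=range(1, n_features + 1))

print(f"{n_features} features found; the largest spans {int(sizes.max())} grid cells")
```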

Relevance:

10.00%

Publisher:

Abstract:

Graphene nanoribbons (GNRs), which are defined as nanometer-wide strips of graphene, are attracting increasing attention as one of the most promising materials for future nanoelectronics. Unlike zero-bandgap graphene, which cannot be switched off in transistors, GNRs possess open bandgaps that critically depend on their width and edge structures. GNRs have predominantly been prepared through “top-down” methods such as “cutting” of graphene and “unzipping” of carbon nanotubes, but these methods cannot precisely control the structure of the resulting GNRs. In contrast, the “bottom-up” chemical synthetic approach enables the fabrication of structurally defined and uniform GNRs from tailor-made polyphenylene precursors. Nevertheless, the width and length of the GNRs obtainable by this method were considerably limited. In this study, lateral as well as longitudinal extension of GNRs was achieved while preserving the high structural definition, based on bottom-up solution synthesis. Initially, wider (~2 nm) GNRs were synthesized by using laterally expanded monomers through AA-type Yamamoto polymerization, which proved more efficient than the conventional A2B2-type Suzuki polymerization. The wider GNRs showed a broad absorption profile extending to the near-infrared region with a low optical bandgap of 1.12 eV, which indicated the potential of such GNRs for application in photovoltaic cells. Next, high longitudinal extension of narrow (~1 nm) GNRs to over 600 nm was accomplished based on AB-type Diels–Alder polymerization, which provided the corresponding polyphenylene precursors with weight-average molecular weights larger than 600,000 g/mol. Bulky alkyl chains densely installed on the peripheral positions of these GNRs enhanced their liquid-phase processability, which allowed the formation of highly ordered self-assembled monolayers. Furthermore, non-contact time-resolved terahertz spectroscopy measurements demonstrated high charge-carrier mobility within individual GNRs. Remarkably, lateral extension of the AB-type monomer enabled the fabrication of wider (~2 nm) and long (>100 nm) GNRs through the Diels–Alder polymerization. Such longitudinally extended and structurally well-defined GNRs are expected to allow the fabrication of single-ribbon transistors for fundamental studies of the electronic properties of GNRs, as well as to contribute to the development of future electronic devices.

Relevance:

10.00%

Publisher:

Abstract:

Although the Standard Model of particle physics (SM) provides an extremely successful description of ordinary matter, one knows from astronomical observations that it accounts for only around 5% of the total energy density of the Universe, whereas around 30% is contributed by dark matter. Motivated by anomalies in cosmic ray observations and by attempts to solve questions of the SM like the (g-2)_mu discrepancy, proposed U(1) extensions of the SM gauge group have attracted attention in recent years. In the considered U(1) extensions a new, light messenger particle, the hidden photon, couples to the hidden sector as well as to the electromagnetic current of the SM by kinetic mixing. This allows for a search for this particle in laboratory experiments exploring the electromagnetic interaction. Various experimental programs have been started to search for hidden photons, such as electron-scattering experiments, which are a versatile tool to explore various physics phenomena. One approach is the dedicated search in fixed-target experiments at modest energies, as performed at MAMI or at JLAB. In these experiments the scattering of an electron beam off a hadronic target, e+(A,Z)->e+(A,Z)+l^+l^-, is investigated and a search for a very narrow resonance in the invariant mass distribution of the lepton pair is performed. This requires an accurate understanding of the theoretical basis of the underlying processes. For this purpose it is demonstrated in the first part of this work how the hidden photon can be motivated from existing puzzles encountered at the precision frontier of the SM. The main part of this thesis deals with the analysis of the theoretical framework for electron-scattering fixed-target experiments searching for hidden photons. As a first step, the cross section for the bremsstrahlung emission of hidden photons in such experiments is studied. Based on these results, the applicability of the Weizsäcker-Williams approximation, which is widely used to design such experimental setups, to the calculation of the signal cross section of the process is investigated. In a next step, the reaction e+(A,Z)->e+(A,Z)+l^+l^- is analyzed as signal and background process in order to describe existing data obtained by the A1 experiment at MAMI, with the aim of giving accurate predictions of exclusion limits for the hidden photon parameter space. Finally, the derived methods are used to make predictions for future experiments, e.g., at MESA or at JLAB, allowing for a comprehensive study of the discovery potential of these complementary experiments. In the last part, a feasibility study for probing the hidden photon model with rare kaon decays is performed. For this purpose, invisible as well as visible decays of the hidden photon are considered within different classes of models. This allows one to derive bounds on the parameter space from existing data and to estimate the reach of future experiments.

Relevance:

10.00%

Publisher:

Abstract:

The Standard Model of particle physics is a very successful theory which describes nearly all known processes of particle physics very precisely. Nevertheless, there are several observations which cannot be explained within the existing theory. In this thesis, two analyses with high energy electrons and positrons using data of the ATLAS detector are presented: one probing the Standard Model of particle physics and another searching for phenomena beyond the Standard Model.

The production of an electron-positron pair via the Drell-Yan process leads to a very clean signature in the detector with low background contributions. This allows for a very precise measurement of the cross-section and can be used as a precision test of perturbative quantum chromodynamics (pQCD), where this process has been calculated at next-to-next-to-leading order (NNLO). The invariant mass spectrum mee is sensitive to parton distribution functions (PDFs), in particular to the poorly known distribution of antiquarks at large momentum fraction (Bjorken x). The measurement of the high-mass Drell-Yan cross-section in proton-proton collisions at a center-of-mass energy of sqrt(s) = 7 TeV is performed on a dataset collected with the ATLAS detector, corresponding to an integrated luminosity of 4.7 fb-1. The differential cross-section of pp -> Z/gamma + X -> e+e- + X is measured as a function of the invariant mass in the range 116 GeV < mee < 1500 GeV. The background is estimated using a data-driven method and Monte Carlo simulations. The final cross-section is corrected for detector effects and different levels of final state radiation corrections. A comparison is made to various event generators and to predictions of pQCD calculations at NNLO. Good agreement within the uncertainties between measured cross-sections and Standard Model predictions is observed.

Examples of observed phenomena which cannot be explained by the Standard Model are the amount of dark matter in the universe and neutrino oscillations. To explain these phenomena, several extensions of the Standard Model have been proposed, some of them leading to new processes with a high multiplicity of electrons and/or positrons in the final state. A model-independent search in multi-object final states, with objects defined as electrons and positrons, is performed to search for these phenomena. The dataset collected at a center-of-mass energy of sqrt(s) = 8 TeV, corresponding to an integrated luminosity of 20.3 fb-1, is used. The events are separated into different categories using the object multiplicity. The data-driven background method, already used for the cross-section measurement, was developed further for up to five objects to obtain an estimate of the number of events including fake contributions. Within the uncertainties, the comparison between data and Standard Model predictions shows no significant deviations.
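The invariant mass spectrum referred to above is built event by event from the electron and positron four-momenta via m_ee = sqrt((E1+E2)^2 - |p1+p2|^2). A minimal sketch with invented kinematics (GeV, massless-lepton approximation), purely illustrative and unrelated to the ATLAS analysis code:

```python
import numpy as np

def four_momentum(px, py, pz):
    """Massless-lepton four-vector (E, px, py, pz) with E = |p| (natural units, GeV)."""
    return np.array([np.sqrt(px**2 + py**2 + pz**2), px, py, pz])

def invariant_mass(p1, p2):
    """m = sqrt((E1+E2)^2 - |p1+p2|^2) for four-vectors (E, px, py, pz)."""
    s = p1 + p2
    return np.sqrt(max(s[0]**2 - np.dot(s[1:], s[1:]), 0.0))

# Invented electron and positron momenta in GeV
electron = four_momentum(20.0, 30.0, 26.5)
positron = four_momentum(-18.0, -33.0, 32.0)
print(f"m_ee = {invariant_mass(electron, positron):.1f} GeV")
```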

Relevance:

10.00%

Publisher:

Abstract:

The subject of this thesis is the development and combination of different numerical methods and their application to problems of strongly correlated electron systems. Such materials show many interesting physical properties, e.g. superconductivity and magnetic order, and play an important role in technical applications. Two different models are treated: the Hubbard model and the Kondo lattice model (KLM). In recent decades, many insights have already been gained from the numerical solution of these models. Nevertheless, the physical origin of many effects remains hidden, because current methods are restricted to certain parameter regimes. One of the strongest restrictions is the lack of efficient algorithms for low temperatures.

Based on the Blankenbecler-Scalapino-Sugar quantum Monte Carlo (BSS-QMC) algorithm, we present a numerically exact method that solves the Hubbard model and the KLM efficiently at very low temperatures. This method is applied to the Mott transition in the two-dimensional Hubbard model. In contrast to earlier studies, we can clearly rule out a Mott transition at finite temperatures and finite interactions.

On the basis of this exact BSS-QMC algorithm, we have developed an impurity solver for dynamical mean-field theory (DMFT) and its cluster extensions (CDMFT). DMFT is the prevailing theory of strongly correlated systems, for which conventional band structure calculations fail. A main limitation is the availability of efficient impurity solvers for the intrinsic quantum problem. The algorithm developed in this thesis has the same superior scaling with inverse temperature as BSS-QMC. We study the Mott transition within DMFT and analyze the influence of systematic errors on this transition.

Another prominent topic is the neglect of non-local interactions in DMFT. To address it, we combine direct BSS-QMC lattice calculations with CDMFT for the half-filled two-dimensional anisotropic Hubbard model, the doped Hubbard model and the KLM. The results for the different models differ strongly: while non-local correlations play an important role in the two-dimensional (anisotropic) model, the momentum dependence of the self-energy in the paramagnetic phase is much weaker for strongly doped systems and for the KLM. A remarkable finding is that the self-energy can be parametrized by the non-interacting dispersion. This particular structure of the self-energy in momentum space can be very useful for classifying electronic correlation effects and opens the way for the development of new schemes beyond the limits of DMFT.
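For readers unfamiliar with the Hubbard model mentioned above, its smallest nontrivial case, two sites with two electrons, can be solved by exact diagonalization in a few lines. The Python sketch below is only this textbook toy with arbitrary values of the hopping t and interaction U; it is unrelated to the BSS-QMC machinery developed in the thesis.

```python
import numpy as np

t, U = 1.0, 4.0   # arbitrary hopping and on-site interaction

# Two-site Hubbard model, two electrons, Sz = 0 sector.
# Basis: |up down, 0>, |0, up down>, |up, down>, |down, up>
H = np.array([
    [U,   0.0, -t,   t ],
    [0.0, U,    t,  -t ],
    [-t,  t,   0.0, 0.0],
    [ t, -t,   0.0, 0.0],
])

energies = np.linalg.eigvalsh(H)
ground_state = 0.5 * (U - np.sqrt(U**2 + 16 * t**2))   # known analytic result
print(energies.min(), ground_state)                     # the two values agree
```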

Relevance:

10.00%

Publisher:

Abstract:

Time series are ubiquitous. The acquisition and processing of continuously measured data is found in all areas of the natural sciences, medicine and finance. The enormous growth of recorded data volumes, whether through automated monitoring systems or integrated sensors, calls for exceptionally fast algorithms in theory and practice. This thesis therefore deals with the efficient computation of subsequence alignments. Complex algorithms such as anomaly detection, motif queries or the unsupervised extraction of prototypical building blocks in time series make extensive use of these alignments, which is where the need for fast implementations arises. The work is divided into three approaches that address this challenge: four alignment algorithms and their parallelization on CUDA-capable hardware, an algorithm for the segmentation of data streams, and a unified treatment of Lie-group-valued time series.

The first contribution is a complete CUDA port of the UCR Suite, the world-leading implementation of subsequence alignment. It comprises a new computation scheme for determining local alignment scores under the z-normalized Euclidean distance, which can be used on any parallel hardware that supports fast Fourier transforms. Furthermore, we provide a SIMT-compatible implementation of the UCR Suite's lower-bound cascade for the efficient computation of local alignment scores under dynamic time warping (DTW). Both CUDA implementations enable computations one to two orders of magnitude faster than established methods.

Second, we investigate two linear-time approximations for the elastic alignment of subsequences. On the one hand, we treat a SIMT-compatible relaxation scheme for greedy DTW and its efficient CUDA parallelization. On the other hand, we introduce a new local distance measure, the Gliding Elastic Match (GEM), which can be computed with the same asymptotic time complexity as greedy DTW but offers a complete relaxation of the penalty matrix. Further improvements include invariance against trends on the measurement axis and uniform scaling on the time axis. In addition, an extension of GEM to multi-shape segmentation is discussed and evaluated on motion data. Both CUDA parallelizations achieve runtime improvements of up to two orders of magnitude.

In the literature, the treatment of time series is usually restricted to real-valued measurement data. The third contribution is a unified method for handling Lie-group-valued time series. Building on this, distance measures on the rotation group SO(3) and on the Euclidean group SE(3) are treated. Furthermore, memory-efficient representations and group-compatible extensions of elastic measures are discussed.
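The z-normalized Euclidean distance at the core of the UCR Suite compares each query-length window of the series after shifting it to zero mean and scaling it to unit variance. The brute-force Python sketch below (plain NumPy, not the FFT-based or CUDA scheme from the thesis) shows the quantity being computed; the series and query are invented.

```python
import numpy as np

def znorm(x):
    """Shift to zero mean and scale to unit variance."""
    s = x.std()
    return (x - x.mean()) / s if s > 0 else np.zeros_like(x)

def best_match(series, query):
    """Index and distance of the subsequence closest to `query` under the
    z-normalized Euclidean distance (brute force, O(n*m))."""
    q, m = znorm(query), len(query)
    dists = [np.linalg.norm(znorm(series[i:i + m]) - q)
             for i in range(len(series) - m + 1)]
    best = int(np.argmin(dists))
    return best, dists[best]

# Invented data: a noisy series with the query pattern embedded near index 300
rng = np.random.default_rng(0)
series = rng.standard_normal(1000) * 0.3
series[300:364] += np.sin(np.linspace(0, 2 * np.pi, 64))
query = np.sin(np.linspace(0, 2 * np.pi, 64))
print(best_match(series, query))
```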

Relevance:

10.00%

Publisher:

Abstract:

Nowadays, data handling and data analysis in High Energy Physics require a vast amount of computational power and storage. In particular, the world-wide LHC Computing Grid (LCG), an infrastructure and pool of services developed and deployed by a large community of physicists and computer scientists, has proven to be a game changer in the efficiency of data analyses during Run-I at the LHC, playing a crucial role in the Higgs boson discovery. Recently, the Cloud computing paradigm has been emerging and reaching a considerable level of adoption by many different scientific organizations and beyond. Clouds allow access to large computing resources, not owned by the user, that are shared among many scientific communities. Considering the challenging requirements of LHC physics in Run-II and beyond, the LHC computing community is interested in exploring Clouds to see whether they can provide a complementary approach - or even a valid alternative - to the existing technological solutions based on the Grid. In the LHC community, several experiments have been adopting Cloud approaches, and in particular the experience of the CMS experiment is of relevance to this thesis. The LHC Run-II has just started, and Cloud-based solutions are already in production for CMS. However, other approaches to Cloud usage are being considered and are at the prototype level, as is the work done in this thesis. This effort is of paramount importance in order to equip CMS with the capability to elastically and flexibly access and utilize the computing resources needed to face the challenges of Run-III and Run-IV. The main purpose of this thesis is to present forefront Cloud approaches that allow the CMS experiment to extend to on-demand resources dynamically allocated as needed. Moreover, direct access to Cloud resources is presented as a suitable use case to address the needs of the CMS experiment. Chapter 1 presents an overview of High Energy Physics at the LHC and of the CMS experience in Run-I, as well as the preparation for Run-II. Chapter 2 describes the current CMS Computing Model, and Chapter 3 presents the Cloud approaches pursued and used within the CMS Collaboration. Chapter 4 and Chapter 5 discuss the original and forefront work done in this thesis to develop and test working prototypes of elastic extensions of CMS computing resources on Clouds, and of HEP Computing "as a Service". The impact of this work on a benchmark CMS physics use case is also demonstrated.

Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND: Physiologic data display is essential to decision making in critical care. Current displays echo first-generation hemodynamic monitors dating to the 1970s and have not kept pace with new insights into physiology or the needs of clinicians who must make progressively more complex decisions about their patients. The effectiveness of any redesign must be tested before deployment. Tools that compare current displays with novel presentations of processed physiologic data are required. Regenerating conventional physiologic displays from archived physiologic data is an essential first step. OBJECTIVES: The purposes of the study were to (1) describe the SSSI (single sensor single indicator) paradigm that is currently used for physiologic signal displays, (2) identify and discuss possible extensions and enhancements of the SSSI paradigm, and (3) develop a general approach and a software prototype to construct such "extended SSSI displays" from raw data. RESULTS: We present the Multi Wave Animator (MWA) framework, a set of open source MATLAB (MathWorks, Inc., Natick, MA, USA) scripts aimed at creating dynamic visualizations (e.g., video files in AVI format) of patient vital signs recorded from bedside (intensive care unit or operating room) monitors. Multi Wave Animator creates animations in which vital signs are displayed to mimic their appearance on current bedside monitors. The source code of MWA is freely available online together with a detailed tutorial and sample data sets.
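MWA itself is a set of MATLAB scripts; as a rough illustration of the underlying idea (replaying archived waveform samples as a scrolling, monitor-style trace), the Python/matplotlib sketch below animates an invented signal. The sampling rate, window length and frame stepping are arbitrary choices for this sketch, not MWA behaviour.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

FS, WINDOW_S = 250, 4                      # invented sampling rate (Hz) and visible window (s)
t = np.arange(0, 60, 1 / FS)               # one minute of synthetic "archived" samples
signal = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.default_rng(0).standard_normal(t.size)

fig, ax = plt.subplots()
line, = ax.plot([], [], lw=1)
ax.set_xlim(0, WINDOW_S)
ax.set_ylim(signal.min(), signal.max())
ax.set_xlabel("time in window [s]")

def update(frame):
    # Slide a fixed-length window over the archive, mimicking a bedside strip chart
    start = frame * FS // 10                # advance by 0.1 s per frame
    window = signal[start:start + WINDOW_S * FS]
    line.set_data(np.arange(window.size) / FS, window)
    return line,

anim = FuncAnimation(fig, update, frames=200, interval=40, blit=True)
plt.show()                                  # or save to video with a suitable writer installed
```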