31 results for Exakte Wissenschaften


Abstract:

In this thesis we develop further the functional renormalization group (RG) approach to quantum field theory (QFT) based on the effective average action (EAA) and on the exact flow equation that it satisfies. The EAA is a generalization of the standard effective action that interpolates smoothly between the bare action for k → ∞ and the standard effective action for k → 0. In this way, the problem of performing the functional integral is converted into the problem of integrating the exact flow of the EAA from the UV to the IR. The EAA formalism deals naturally with several different aspects of a QFT. One aspect is related to the discovery of non-Gaussian fixed points of the RG flow that can be used to construct continuum limits. In particular, the EAA framework is a useful setting to search for Asymptotically Safe theories, i.e. theories valid up to arbitrarily high energies. A second aspect in which the EAA reveals its usefulness is non-perturbative calculations: the exact flow that it satisfies is a valuable starting point for devising new approximation schemes.

In the first part of this thesis we review and extend the formalism; in particular, we derive the exact RG flow equation for the EAA and the related hierarchy of coupled flow equations for the proper vertices. We show how standard perturbation theory emerges as a particular way to iteratively solve the flow equation if the starting point is the bare action. Next, we explore both technical and conceptual issues by means of three different applications of the formalism: to QED, to general non-linear sigma models (NLσM), and to matter fields on curved spacetimes.

In the main part of this thesis we construct the EAA for non-abelian gauge theories and for quantum Einstein gravity (QEG), using the background field method to implement the coarse-graining procedure in a gauge-invariant way. We propose a new truncation scheme where the EAA is expanded in powers of the curvature or field strength. Crucial to the practical use of this expansion is the development of new techniques to manage functional traces, such as the algorithm proposed in this thesis, which makes it possible to project the flow of all terms in the EAA that are analytic in the fields. As an application, we show how the low-energy effective action for quantum gravity emerges as the result of integrating the RG flow.

In any treatment of theories with local symmetries that introduces a reference scale, the question of preserving gauge invariance along the flow becomes paramount. In the EAA framework this problem is dealt with by the use of the background field formalism, at the cost of enlarging the theory space where the EAA lives to the space of functionals of both fluctuation and background fields. In this thesis we study how the identities dictated by the symmetries are modified by the introduction of the cutoff, and we study so-called bimetric truncations of the EAA that contain both fluctuation and background couplings. In particular, we confirm the existence of a non-Gaussian fixed point for QEG, which is at the heart of the Asymptotic Safety scenario in quantum gravity, within the enlarged bimetric theory space, where the running of the cosmological constant and of Newton's constant is influenced by fluctuation couplings.
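For reference, the exact flow equation satisfied by the EAA is the well-known Wetterich equation, which has a one-loop structure; in the standard notation, with IR regulator R_k and RG time t = ln k:

```latex
% Exact RG flow equation for the EAA (Wetterich equation).
% Gamma_k^{(2)}: second functional derivative of the EAA;
% STr: supertrace over field components and indices.
\partial_t \Gamma_k[\varphi]
  = \frac{1}{2}\,\mathrm{STr}\!\left[
      \left(\Gamma_k^{(2)}[\varphi] + R_k\right)^{-1} \partial_t R_k
    \right],
\qquad t = \ln k .
```

Iterating this equation starting from the bare action reproduces the loop expansion, which is the sense in which standard perturbation theory emerges as one particular solution scheme.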

Abstract:

This work deals with the car sequencing (CS) problem, a combinatorial optimization problem for sequencing mixed-model assembly lines. The aim is to find a production sequence for different variants of a common base product, such that work overload of the respective line operators is avoided or minimized. The variants are distinguished by certain options (e.g., sun roof yes/no) and therefore require different processing times at the stations of the line. CS introduces a so-called sequencing rule H:N for each option, which restricts the occurrence of this option to at most H in any N consecutive variants. It seeks a sequence with no, or a minimum number of, sequencing-rule violations. In this work, CS's suitability for workload-oriented sequencing is analyzed. To this end, its solution quality is compared in experiments with that of the related mixed-model sequencing problem. A new sequencing rule generation approach as well as a new lower bound for the problem are presented. Different exact and heuristic solution methods for CS are developed and their efficiency is shown in experiments. Furthermore, CS is adjusted and applied to a resequencing problem with pull-off tables.
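To make the H:N notation concrete, the sketch below (illustrative only, not the thesis implementation; the variant codes, `has_option` predicate, and the 1:2 rule are made-up examples) counts how many windows of N consecutive variants violate a sequencing rule:

```python
def count_violations(sequence, has_option, H, N):
    """Count violations of a sequencing rule H:N.

    A violation is any window of N consecutive variants containing
    more than H variants that require the option.
    """
    violations = 0
    for start in range(len(sequence) - N + 1):
        window = sequence[start:start + N]
        if sum(1 for variant in window if has_option(variant)) > H:
            violations += 1
    return violations

# Hypothetical example: rule 1:2 for the sun roof option, i.e. at most
# one car with a sun roof in any two consecutive positions.
sequence = ["A", "B", "A", "A", "C"]        # variant codes (made up)
needs_sunroof = lambda v: v == "A"          # variant A has the option
print(count_violations(sequence, needs_sunroof, H=1, N=2))  # -> 1
```

Note that violation-counting conventions differ in the literature (per window versus per excess occurrence); the thesis's objective follows its own exact definition.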

Abstract:

In this thesis, the phenomenology of the Randall-Sundrum setup is investigated. In this context, models with and without an enlarged SU(2)_L × SU(2)_R × U(1)_X × P_{LR} gauge symmetry, which removes corrections to the T parameter and to the Z b_L \bar b_L coupling, are compared with each other. The Kaluza-Klein decomposition is formulated within the mass basis, which allows for a clear understanding of various model-specific features. A complete discussion of tree-level flavor-changing effects is presented. Exact expressions for five-dimensional propagators are derived, including Yukawa interactions that mediate flavor-off-diagonal transitions. The symmetry that reduces the corrections to the left-handed Z b \bar b coupling is analyzed in detail. In the literature, Randall-Sundrum models have been used to address the measured anomaly in the t \bar t forward-backward asymmetry. However, it will be shown that this is not possible within a natural approach to flavor. The rare decays t \to cZ and t \to ch are investigated, where in particular the latter could be observed at the LHC. A calculation of \Gamma_{12}^{B_s} in the presence of new physics is presented. It is shown that the Randall-Sundrum setup allows for an improved agreement with measurements of A_{SL}^s, S_{\psi\phi}, and \Delta\Gamma_s. For the first time, a complete one-loop calculation of all relevant Higgs-boson production and decay channels in the custodial Randall-Sundrum setup is performed, revealing a sensitivity to large new-physics scales at the LHC.
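For context, the t \bar t forward-backward asymmetry referred to above is conventionally defined in terms of the rapidity difference of top and antitop (standard definition, not specific to this thesis):

```latex
% N counts events with positive/negative rapidity difference
% \Delta y = y_t - y_{\bar t}.
A_{FB}^{t} =
  \frac{N(\Delta y > 0) - N(\Delta y < 0)}
       {N(\Delta y > 0) + N(\Delta y < 0)} .
```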

Abstract:

The amyloid precursor protein (APP) is a type I transmembrane glycoprotein, which resembles a cell surface receptor, comprising a large ectodomain, a single-spanning transmembrane part, and a short C-terminal, cytoplasmic domain. It belongs to a conserved gene family, with over 17 members, including the two mammalian APP homologues APLP1 and APLP2 ("amyloid precursor-like proteins"). APP is encoded by 19 exons, of which exons 7, 8, and 15 can be alternatively spliced to produce three major protein isoforms, APP770, APP751 and APP695, named after the number of amino acids. The neuronal APP695 is the only isoform that lacks a Kunitz protease inhibitor (KPI) domain in its extracellular portion, whereas the two larger, peripheral APP isoforms contain the 57-amino-acid KPI insert.

Recent research suggests that APP metabolism and function are influenced by homodimerization and that the oligomerization state of APP could also play a role in the pathology of Alzheimer's disease (AD) by regulating its processing and amyloid beta production. Several independent studies have shown that APP can form homodimers within the cell, driven by motifs present in the extracellular domain as well as in the juxtamembrane (JM) and transmembrane (TM) regions of the molecule, although the exact molecular mechanism and the origin of dimer formation remain elusive. We therefore focused in our study on the actual subcellular origin of APP homodimerization within the cell, an underlying mechanism, and a possible impact on the dimerization properties of its homologue APLP1. Furthermore, we analyzed homodimerization of various APP isoforms, in particular APP695, APP751 and APP770, which differ in the presence of a Kunitz-type protease inhibitor (KPI) domain in the extracellular region. In order to assess the cellular origin of dimerization under different cellular conditions, we established a mammalian cell culture model system in CHO-K1 (Chinese hamster ovary) cells stably overexpressing human APP harboring dilysine-based organelle sorting motifs at the very C-terminus [KKAA, endoplasmic reticulum (ER); KKFF, Golgi]. In this study we show that APP exists as disulfide-bound, SDS-stable dimers when retained in the ER, but not when it progresses further to the cis-Golgi owing to the KKFF ER-exit determinant. These stable APP complexes were isolated from cells and analyzed by SDS-polyacrylamide gel electrophoresis under non-reducing conditions, whereas strong denaturing and reducing conditions completely converted those dimers to monomers. Our findings suggest that APP homodimer formation starts early in the secretory pathway and that the unique oxidizing environment of the ER likely promotes intermolecular disulfide bond formation between APP molecules. We visualized APP dimerization employing a variety of biochemical experiments and investigated the origin of its generation by using a bimolecular fluorescence complementation (BiFC) approach with split-GFP-APP chimeras. Moreover, using N-terminal deletion constructs, we demonstrate that intermolecular disulfide linkage between cysteine residues, exclusively located in the extracellular E1 domain, represents another mechanism by which an APP sub-fraction can dimerize within the cell. Additionally, mutational studies revealed that the cysteines at positions 98 and 105, embedded in the conserved loop region within the E1 domain, are critical for interchain disulfide bond formation.

Using a pharmacological treatment approach, we show that once generated in the oxidative environment of the ER, APP dimers remain stably associated during transport to the plasma membrane. In addition, we demonstrate that APP isoforms encompassing the KPI domain exhibit a strongly reduced ability to form cis-directed dimers in the ER, whereas trans-directed cell aggregation of Drosophila Schneider (S2) cells, mediating cell-cell contacts, was isoform independent. This suggests that steric properties of KPI-APP might be the cause of the weaker cis-interaction in the ER compared to APP695. Finally, we provide evidence that APP/APLP1 heterointeractions are likewise initiated in the ER, suggesting a similar mechanism for heterodimerization. Dynamic alterations of APP between monomeric, homodimeric, and possibly heterodimeric states could therefore at least partially explain some of the variety in the physiological functions of APP.

Abstract:

This thesis describes the ultra-precise determination of the g-factor of the electron bound to hydrogen-like 28Si13+. The experiment is based on the simultaneous determination of the cyclotron and Larmor frequencies of a single ion, which is stored in a triple Penning-trap setup. The continuous Stern-Gerlach effect is used to couple the spin of the bound electron to the motional frequencies of the ion via a magnetic bottle, which allows the non-destructive determination of the spin state. To this end, a highly sensitive, cryogenic detection system was developed, which allowed the direct, non-destructive detection of the eigenfrequencies with the required precision.

The development of a novel, phase-sensitive detection technique finally allowed the determination of the g-factor with a relative accuracy of 40 ppt, which was previously inconceivable. The comparison of the value determined here with the value predicted by quantum electrodynamics (QED) allows the verification of the validity of this fundamental theory under the extreme conditions of the strong binding potential of a highly charged ion. The agreement of theory and experiment is an impressive demonstration of the accuracy of QED. The experimental possibilities created in this work will, in the near future, allow not only further tests of theory, but also the determination of the mass of the electron with a precision that exceeds that of the current literature value by more than an order of magnitude.
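The underlying relation is the standard one for Penning-trap g-factor measurements (not specific to this work): writing the Larmor and free-cyclotron frequencies in the same magnetic field B, the field drops out of their ratio:

```latex
% q, M: charge and mass of the ion; e, m_e: electron charge and mass.
\omega_L = \frac{g}{2}\,\frac{e}{m_e}\,B, \qquad
\omega_c = \frac{q}{M}\,B
\quad\Longrightarrow\quad
g = 2\,\frac{\omega_L}{\omega_c}\,\frac{q}{e}\,\frac{m_e}{M} .
```

The measurement thus reduces to a frequency-ratio determination together with the ion-to-electron mass ratio, which is also why the same setup can conversely be used to extract the electron mass once the g-factor is taken from theory.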

Abstract:

The human gene human giant larvae (hugl) is a homologue of the highly conserved Drosophila gene lethal giant larvae (lgl), which acts in epithelial cells as a neoplastic tumor suppressor and regulator of polarity. Loss or reduced expression of the two homologues of this gene, hugl-1 and hugl-2, is associated with the onset and progression of various epithelial cancers such as malignant melanoma and breast, colon, or lung tumors. The exact function of the homologues Hugl-1 and Hugl-2 in regulating and maintaining epithelial cell polarity, as well as their role in the genesis of human tumors, is, however, largely unknown. Entirely unknown is also the significance of Hugl-1 and Hugl-2 as polarity regulators for the formation and maintenance of T-cell morphology and function. The aim of this work was therefore to characterize the polarity and tumor suppressor genes hugl-1 and hugl-2 in functional analyses using siRNA-mediated gene silencing in epithelial cells and T lymphocytes. In addition, the functions and properties of mgl-2, the murine homologue of hugl-2, were analyzed in vivo in a Cre/loxP-mediated conditional knockout mouse model.

To characterize the biological effects of Hugl-1 and Hugl-2 on the growth behavior, migration, and invasion of epithelial cells, various shRNA expression constructs were successfully generated and Hugl-suppressed cell lines established. In vitro studies and in vivo tumorigenicity analyses consistently indicated that reduced Hugl-1 and Hugl-2 expression levels play a significant role in conferring invasive and tumorigenic properties on epithelial cells. Loss of both homologues elicited markedly stronger effects than suppression of a single homologue. Moreover, overexpression of the cell-cycle regulator cyclin D1 and hyperproliferation of Hugl-1- and/or Hugl-2-depleted epithelial cells pointed to an important role of the two homologues in cell-cycle progression and cell proliferation. Low expression of Hugl-1 and -2 furthermore appeared to correlate with increased resistance to chemotherapeutic agents. This work also showed that the T lymphocytes examined express only Hugl-1, and that the latter is required for the F-actin-mediated maintenance of T-cell polarity and morphology. Hugl-1-suppressed T lymphocytes stimulated via independent signaling pathways (TCR or chemokine receptor) showed a substantial impairment of lamellipodium and uropod formation, suggesting an interaction of Hugl-1 at the level of F-actin. Furthermore, the polarity regulator Hugl-1 was found to positively influence CD3/TCR-induced cell adhesion. Consistent with this, analysis of T-cell migration and motility revealed the importance of Hugl-1 for T-cell polarization and migration both in chemokine gradients and on mDCs.

To elucidate the functional role of mgl-2 in vivo, a tamoxifen-inducible, Cre/loxP-mediated conditional mouse line was generated and analyzed. The mgl-2-deleted animals showed neither significant phenotypic differences nor abnormalities in organ anatomy, suggesting compensation by the mgl-1 gene, which is co-expressed in the intestinal epithelium and possibly functionally redundant.

Abstract:

The use of linear programming in various areas has increased with the significant improvement of specialized solvers. Linear programs are used as such to model practical problems, or as subroutines in algorithms such as formal proofs or branch-and-cut frameworks. In many situations a certified answer is needed, for example the guarantee that the linear program is feasible or infeasible, or a provably safe bound on its objective value. Most of the available solvers work with floating-point arithmetic and are thus subject to its shortcomings, such as rounding errors or underflow, so they can deliver incorrect answers. While adequate for some applications, this is unacceptable for critical applications like flight control or nuclear plant management due to the potential catastrophic consequences. We propose a method that gives a certified answer whether a linear program is feasible or infeasible, or returns 'unknown'. The advantage of our method is that it is reasonably fast and rarely answers 'unknown'. It works by computing a safe solution that is in some way the best possible in the relative interior of the feasible set. To certify the relative interior, we employ exact arithmetic, whose use is nevertheless limited in general to critical places, allowing us to remain computationally efficient. Moreover, when certain conditions are fulfilled, our method is able to deliver a provable bound on the objective value of the linear program. We test our algorithm on typical benchmark sets and obtain higher rates of success compared to previous approaches for this problem, while keeping the running times acceptably small. The computed objective value bounds are in most cases very close to the known exact objective values. We prove the usability of the method we developed by additionally employing a variant of it in a different scenario, namely to improve the results of a Satisfiability Modulo Theories solver. Our method is used as a black box in the nodes of a branch-and-bound tree to implement conflict learning based on the certificate of infeasibility for linear programs consisting of subsets of linear constraints. The generated conflict clauses are in general small and give good prospects for reducing the search space. Compared to other methods, we obtain significant improvements in the running time, especially on the large instances.
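As an illustration of the exact-verification step (a minimal sketch under simplifying assumptions, not the thesis implementation: only strict inequalities Ax < b are checked, whereas the thesis certifies points in the relative interior, which also involves equality constraints):

```python
from fractions import Fraction

def certify_strict_feasibility(A, b, x):
    """Certify in exact rational arithmetic that A x < b componentwise.

    A: list of constraint rows, b: right-hand sides, x: candidate point,
    e.g. computed by a floating-point solver. All numbers are converted
    to Fractions, so the verdict is unaffected by rounding errors.
    """
    x = [Fraction(v) for v in x]
    for row, rhs in zip(A, b):
        lhs = sum(Fraction(a) * v for a, v in zip(row, x))
        if lhs >= Fraction(rhs):
            return False       # constraint not strictly satisfied
    return True                # x certifies feasibility

# Hypothetical system: x1 + x2 < 4, x1 > 0, x2 > 0.
A = [[1, 1], [-1, 0], [0, -1]]
b = [4, 0, 0]
print(certify_strict_feasibility(A, b, [1.5, 1.5]))  # True
```

The design point mirrors the abstract: floating-point arithmetic does the expensive search, and exact arithmetic is confined to the cheap final verification.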

Abstract:

The asymptotic safety scenario makes it possible to define a consistent theory of quantized gravity within the framework of quantum field theory. The central conjecture of this scenario is the existence of a non-Gaussian fixed point of the theory's renormalization group flow, which allows one to formulate renormalization conditions that render the theory fully predictive. Investigations of this possibility use an exact functional renormalization group equation as the primary non-perturbative tool. This equation implements Wilsonian renormalization group transformations and is demonstrated to represent a reformulation of the functional integral approach to quantum field theory.

As its main result, this thesis develops an algebraic algorithm that allows the renormalization group flow of gauge theories as well as gravity to be constructed systematically in arbitrary expansion schemes. In particular, it uses off-diagonal heat kernel techniques to efficiently handle the non-minimal differential operators which appear due to gauge symmetries. The central virtue of the algorithm is that no additional simplifications need to be employed, opening the possibility for more systematic investigations of the emergence of non-perturbative phenomena. As a by-product, several novel results on the heat kernel expansion of the Laplace operator acting on general gauge bundles are obtained.

The constructed algorithm is used to re-derive the renormalization group flow of gravity in the Einstein-Hilbert truncation, showing the manifest background independence of the results. The well-studied Einstein-Hilbert case is further advanced by taking into account the effect of a running ghost field renormalization on the gravitational coupling constants. A detailed numerical analysis reveals a further stabilization of the non-Gaussian fixed point found.

Finally, the proposed algorithm is applied to the case of higher-derivative gravity including all curvature-squared interactions. This improves on existing computations by taking the independent running of the Euler topological term into account. Known perturbative results are reproduced in this case from the renormalization group equation, which, however, also identifies a unique non-Gaussian fixed point.
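For orientation, the Einstein-Hilbert truncation mentioned above restricts the average action to the two lowest-order invariants with scale-dependent Newton constant G_k and cosmological constant Λ_k (schematic standard form; sign conventions vary, and the gauge-fixing and ghost terms are only indicated):

```latex
\Gamma_k[g] = \frac{1}{16\pi G_k}
  \int \! d^d x\, \sqrt{g}\,\bigl(-R + 2\Lambda_k\bigr)
  + S_{\mathrm{gf}} + S_{\mathrm{ghost}} .
```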

Abstract:

In this thesis we investigate the phenomenology of supersymmetric particles at hadron colliders beyond next-to-leading order (NLO) in perturbation theory. We discuss the foundations of Soft-Collinear Effective Theory (SCET) and, in particular, explicitly construct the SCET Lagrangian for QCD. As an example, we discuss factorization and resummation for the Drell-Yan process in SCET. We use techniques from SCET to improve existing calculations of the cross sections for slepton-pair production and top-squark-pair production at hadron colliders. As a first application, we implement soft-gluon resummation at next-to-next-to-next-to-leading logarithmic (NNNLL) order for slepton-pair production in the minimal supersymmetric extension of the Standard Model (MSSM). This approach resums large logarithmic corrections arising from the dynamical enhancement of the partonic threshold region caused by steeply falling parton luminosities. We evaluate the resummed invariant-mass distribution and total cross section for slepton-pair production at the Tevatron and LHC, and we match these results in the threshold region onto NLO fixed-order calculations. As a second application, we present the most precise predictions available for top-squark-pair production total cross sections at the LHC. These results are based on approximate NNLO formulas in fixed-order perturbation theory, which completely determine the coefficients multiplying the singular plus distributions. The analysis of the threshold region is carried out in pair invariant mass (PIM) kinematics and in single-particle inclusive (1PI) kinematics. We then match our results in the threshold region onto the exact fixed-order NLO results and perform a detailed numerical analysis of the total cross section.
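For orientation, the "singular plus distributions" are, in the standard threshold notation with z = M²/ŝ, the distributions below; at NNLO the singular terms involve these up to n = 3 (generic structure, not copied from the thesis):

```latex
% Plus distributions near partonic threshold z -> 1 and their action
% on a test function g:
P_n(z) = \left[\frac{\ln^n(1-z)}{1-z}\right]_+ , \qquad
\int_0^1 \! dz\,[f(z)]_+\, g(z)
  = \int_0^1 \! dz\, f(z)\,\bigl(g(z) - g(1)\bigr) .
```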

Abstract:

Data deduplication describes a class of approaches that reduce the storage capacity needed to store data or the amount of data that has to be transferred over a network. These approaches detect coarse-grained redundancies within a data set, e.g. a file system, and remove them.

One of the most important applications of data deduplication is backup storage systems, where these approaches are able to reduce the storage requirements to a small fraction of the logical backup data size. This thesis introduces multiple new extensions of so-called fingerprinting-based data deduplication. It starts with the presentation of a novel system design, which allows using a cluster of servers to perform exact data deduplication with small chunks in a scalable way.

Afterwards, a combination of compression approaches for an important, but often overlooked, data structure in data deduplication systems, so-called block and file recipes, is introduced. Using these compression approaches, which exploit unique properties of data deduplication systems, the size of these recipes can be reduced by more than 92% in all investigated data sets. As file recipes can occupy a significant fraction of the overall storage capacity of data deduplication systems, the compression enables significant savings.

A technique to increase the write throughput of data deduplication systems, based on the aforementioned block and file recipes, is introduced next. The novel Block Locality Caching (BLC) uses properties of block and file recipes to overcome the chunk lookup disk bottleneck of data deduplication systems, which limits either their scalability or their throughput. The presented BLC overcomes the disk bottleneck more efficiently than existing approaches and is furthermore shown to be less prone to aging effects.

Finally, it is investigated whether large HPC storage systems exhibit redundancies that can be found by fingerprinting-based data deduplication. Over 3 PB of HPC storage data from different data sets have been analyzed. In most data sets, between 20 and 30% of the data can be classified as redundant. According to these results, future work should further investigate how data deduplication can be integrated into HPC storage systems.

This thesis presents important novel work in different areas of data deduplication research.
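A minimal sketch of the fingerprinting idea underlying such systems (fixed-size chunking and an in-memory index for brevity; production systems typically use content-defined chunking and persistent, disk-backed chunk indexes, which is exactly where the chunk lookup bottleneck arises):

```python
import hashlib

def deduplicate(data: bytes, chunk_size: int = 4096):
    """Split data into chunks and store each unique chunk only once.

    Returns the chunk store (fingerprint -> chunk bytes) and a recipe,
    i.e. the ordered list of fingerprints that reconstructs the data.
    """
    store, recipe = {}, []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        fingerprint = hashlib.sha256(chunk).hexdigest()
        if fingerprint not in store:    # chunk lookup
            store[fingerprint] = chunk  # store new chunk exactly once
        recipe.append(fingerprint)
    return store, recipe

data = b"abcd" * 4096                   # highly redundant input
store, recipe = deduplicate(data)
print(len(recipe), "chunks referenced,", len(store), "stored")  # 4 vs 1
```

The `recipe` list plays the role of the block and file recipes discussed above: reconstructing the data only requires following the fingerprints, which is also what makes recipe compression and locality-based caching effective.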

Abstract:

Oceans are key sources and sinks in the global budgets of significant atmospheric trace gases, termed volatile organic compounds (VOCs). Despite their low concentrations, these species play an important role in the atmosphere, influencing ozone photochemistry and aerosol physics. Surprisingly little work has been done on assessing their emissions or the transport mechanisms and rates between ocean and atmosphere, all of which are important for modelling the atmosphere accurately.

A new Needle Trap Device (NTD) GC-MS method was developed for the effective sampling and analysis of VOCs in seawater. Good repeatability (RSDs < 16%), linearity (R² = 0.96-0.99), and limits of detection in the pM range were obtained for DMS, isoprene, benzene, toluene, p-xylene, (+)-α-pinene and (-)-α-pinene. Laboratory evaluation and subsequent field application indicated that the proposed method can be used successfully in place of the more usually applied extraction techniques (P&T, SPME) to extend the suite of species typically measured in the ocean and to improve detection limits.

During a mesocosm CO2 enrichment study, DMS, isoprene and α-pinene were identified and quantified in seawater samples using the above-mentioned method. Based on correlations with available biological datasets, the effects of ocean acidification as well as possible ocean biological sources were investigated for all examined compounds. The future ocean's acidity was shown to decrease oceanic DMS production, to possibly impact isoprene emissions, but not to affect the production of α-pinene.

In a separate activity, ocean-atmosphere interactions were simulated in a large-scale wind-wave canal facility in order to investigate the gas exchange process and its controlling mechanisms. Air-water exchange rates of 14 chemical species (of which 11 are VOCs) spanning a wide range of solubility (dimensionless solubility α = 0.4 to 5470) and diffusivity (Schmidt number in water, Sc_w = 594 to 1194) were obtained under various turbulent (wind speed at 10 m height, u10 = 0.8 to 15 m s⁻¹) and surfactant-modulated (two different-sized Triton X-100 layers) surface conditions. Reliable and reproducible total gas transfer velocities were obtained, and the derived values and trends were comparable to previous investigations. Through this study, a much better and more comprehensive understanding of the gas exchange process was accomplished. The roles of the friction velocity u_w* and the mean square slope σ_s² in defining phenomena such as waves and wave breaking, near-surface turbulence, bubbles, and surface films were recognized as very significant. u_w* was determined to be the ideal turbulent parameter, while σ_s² best described the related surface conditions. A combination of the two variables u_w* and σ_s² was found to reproduce the air-water gas exchange process faithfully.

A Total Transfer Velocity (TTV) model based on a compilation of 14 tracers and a combination of the u_w* and σ_s² parameters is proposed for the first time. Through the proposed TTV parameterization, a new physical perspective is presented which provides an accurate TTV for any tracer within the examined solubility range. The development of such a comprehensive air-sea gas exchange parameterization represents a highly useful tool for regional and global models, providing accurate total transfer velocity estimations for any tracer and any sea-surface state, simplifying the calculation process and eliminating the calculation uncertainty connected with the selection or combination of different parameterizations.
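As background for the solubility dependence (the classical two-layer resistance picture, a textbook relation rather than the thesis's fitted TTV model): the total transfer velocity combines air-side and water-side transfer velocities in series, weighted by the dimensionless solubility α:

```latex
% k_a, k_w: air- and water-side transfer velocities;
% alpha = C_w/C_a at equilibrium. Referenced to the water side:
\frac{1}{K_w} = \frac{1}{k_w} + \frac{\alpha}{k_a} .
```

Sparingly soluble tracers (small α) are thus water-side controlled and highly soluble tracers air-side controlled, which is why a single parameterization must be validated across the full solubility range examined (α = 0.4 to 5470).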

Abstract:

This cultural-studies-oriented thesis deals with the construction and staging of reality, particularly in relation to body discourses and body practices in makeover shows, and with their complex political and cultural implications. The guiding questions are: How is the body represented in the selected formats? How is it narrated, and with which (film) techniques is it staged? Which explicit and implicit body images underlie these representations? Which cultural and political norms are constructed through the body? What role does this construction of the body play in the presentation of "reality" in these formats? And what is the larger social context of this presentation?

Despite the multitude and complexity of the ways the body is represented, a selection of representative reality TV formats serves to approach their heterogeneous cultural and political significance. Preference is given to formats that explicitly address the transformation of the body, ranging from cosmetic, i.e. non-surgical, changes (clothing, make-up, hairstyle) through fitness and nutrition to medical interventions such as plastic surgery.

First, the object of study is delimited more precisely, which shows that drawing exact boundaries proves challenging because of the elusiveness of the reality TV genre and the transmedia character of the makeover television text. Next, the cultural studies assumptions underlying this work are set out. In addition, the revolutionary research field of Fat Studies is explored and the body-theoretical approach explained. Finally, the film-narratological approach, which serves as the theoretical and methodological basis of the analyses, is discussed in more detail. The thesis also examines how the body is first made narratable and then narrated in the makeover formats under discussion. The participants' bodies are initially transferred into an economic discourse of measurements and numbers so that the "transformation" can later be dramatized discursively as well. The participants' transformation, prepared through this economization and usually presented as a fairy tale, always culminates in the juxtaposition of the "before" and "after" images. This economization, however, is only the narrative basis of a much more comprehensive technology of self-government, "governmentality", mostly conveyed implicitly in the shows; it is discussed controversially with regard to proponents of affect theory who call the transmission of said governmentality into question. The central thesis of this work, which understands the body as the decisive element affirming the "reality" of makeover formats, is furthermore closely tied to the concept of "authenticity". "Authentic" effects are often the self-declared goal of deploying the body in reality TV and manifest themselves in various narrative techniques such as the formats' autobiographical elements. Finally, the self-reflexivity already inherent in the reality TV genre, on the content side as well as the production side, is contextualized and assessed. Ultimately, the question arises which forms of cultural resistance and play survive despite the very dogmatic-seeming content of the makeover series.

Abstract:

"Of course I have [...] occupied myself incessantly with mathematics, all the more since I needed it for my epistemological-philosophical studies, for without mathematics one can hardly philosophize any more," wrote Hermann Broch in 1948, a writer who, roughly ten years earlier, had even claimed of himself that the mathematical was one of his strongest talents.

This pointer, to examine more closely the significance of mathematics for Broch's work, has so far hardly been followed up in the scholarship. Especially with regard to his late work Die Schuldlosen, such considerations are entirely absent, yet they appear indispensable for deciphering this novel, which has often been unjustly dismissed as a minor work because it "cannot be grasped with the usual categories of literary scholarship" (Koopmann, 1994). Since this aspect represents a research desideratum, particularly with regard to Die Schuldlosen, the aim of this thesis was to retrace Broch's mathematical studies in detail and, against this background, to offer a new perspective on Die Schuldlosen. This creates a foundation that opens up an adequate approach to the structure of the novel.

The thesis is divided into two parts. An examination of Broch's theoretical reflections on the basis of selected essays is followed by an interpretation of Die Schuldlosen from this mathematical angle. It becomes clear that Broch's poetics is closely entwined with his mathematical views, thereby demonstrating that both the novel's particular construction and its distinctive mode of narration can indeed be derived from the author's mathematical thinking. Broch uses in particular the mathematical approach to the infinite in his attempts at a literary rendering of the complex reality of his time. Not only elements of fractal geometry play a central role here, but also Broch's own remark that the book is "a kind of novella-novel" (KW 13/1, 243). Indeed, as is shown, the genre of the novella-novel follows from Broch's poetological demands and their realization in the novel. Of particular importance here is that Broch ascribes to myth a role in literature similar to that of mathematics in the sciences in general.

With his novel Die Schuldlosen, Hermann Broch broke new ground by attempting to depict the complex reality of his epoch through his mathematical poetics. For "the totality of the world cannot be grasped by capturing its atoms one by one, but only by revealing its basic features and its essential, indeed, one would like to say, its mathematical structure" (Broch).

Abstract:

The focus of this thesis is to contribute to the development of new, exact solution approaches to different combinatorial optimization problems. In particular, we derive dedicated algorithms for a special class of Traveling Tournament Problems (TTPs), the Dial-A-Ride Problem (DARP), and the Vehicle Routing Problem with Time Windows and Temporal Synchronized Pickup and Delivery (VRPTWTSPD). Furthermore, we extend the concept of using dual-optimal inequalities for stabilized Column Generation (CG) and detail its application to improved CG algorithms for the cutting stock problem, the bin packing problem, the vertex coloring problem, and the bin packing problem with conflicts. In all approaches, we make use of some knowledge about the structure of the problem at hand to individualize and enhance existing algorithms. Specifically, we utilize knowledge about the input data (TTP), problem-specific constraints (DARP and VRPTWTSPD), and the dual solution space (stabilized CG). Extensive computational results proving the usefulness of the proposed methods are reported.
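To illustrate the dual-optimal inequality idea in its best-known setting, the cutting stock problem (a textbook example, not a reproduction of the thesis's derivations): in the Gilmore-Gomory master LP, some optimal dual solution always satisfies simple monotonicity inequalities on the item duals π_i, so adding them restricts the dual space, and thereby stabilizes column generation, without cutting off every optimal dual:

```latex
% P: cutting patterns; a_{ip}: copies of item i in pattern p;
% d_i: demand; w_i: item widths; pi_i: duals of the demand rows.
\min \sum_{p \in P} \lambda_p
\quad \text{s.t.} \quad
\sum_{p \in P} a_{ip}\,\lambda_p \ge d_i \ \ \forall i, \qquad
\lambda \ge 0 ;
\qquad\qquad
w_i \ge w_j \ \Longrightarrow\ \pi_i \ge \pi_j .
```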

Abstract:

The subject of this thesis is the development and combination of different numerical methods and their application to problems of strongly correlated electron systems. Such materials exhibit many interesting physical properties, such as superconductivity and magnetic order, and play an important role in technical applications. Two different models are treated: the Hubbard model and the Kondo lattice model (KLM). Over the past decades, much insight has already been gained through the numerical solution of these models. Nevertheless, the physical origin of many effects remains hidden, because current methods are restricted to certain parameter regimes. One of the strongest limitations is the lack of efficient algorithms for low temperatures.

Based on the Blankenbecler-Scalapino-Sugar quantum Monte Carlo (BSS-QMC) algorithm, we present a numerically exact method that solves the Hubbard model and the KLM efficiently at very low temperatures. This method is applied to the Mott transition in the two-dimensional Hubbard model. In contrast to earlier studies, we can clearly rule out a Mott transition at finite temperatures and finite interactions.

On the basis of this exact BSS-QMC algorithm, we have developed an impurity solver for dynamical mean-field theory (DMFT) and its cluster extensions (CDMFT). DMFT is the predominant theory of strongly correlated systems in which conventional band structure calculations fail. A main limitation is the availability of efficient impurity solvers for the intrinsic quantum problem. The algorithm developed in this work has the same superior scaling with inverse temperature as BSS-QMC. We investigate the Mott transition within DMFT and analyze the influence of systematic errors on this transition.

Another prominent topic is the neglect of non-local correlations in DMFT. To address it, we combine direct BSS-QMC lattice calculations with CDMFT for the half-filled two-dimensional anisotropic Hubbard model, the doped Hubbard model, and the KLM. The results for the different models differ strongly: while non-local correlations play an important role in the two-dimensional (anisotropic) model, the momentum dependence of the self-energy in the paramagnetic phase is much weaker for strongly doped systems and for the KLM. A remarkable finding is that the self-energy can be parametrized by the non-interacting dispersion. This particular structure of the self-energy in momentum space can be very useful for classifying electronic correlation effects and opens the way to developing new schemes beyond the limits of DMFT.
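For reference, the single-band Hubbard model treated throughout is, in standard notation:

```latex
% t: nearest-neighbor hopping on bonds <i,j>; U: on-site repulsion;
% n_{i,sigma} = c^dagger_{i,sigma} c_{i,sigma}: occupation operator.
H = -t \sum_{\langle i,j\rangle,\sigma}
      \bigl(c_{i\sigma}^{\dagger} c_{j\sigma} + \mathrm{h.c.}\bigr)
  + U \sum_i n_{i\uparrow}\, n_{i\downarrow} .
```

The KLM instead couples the conduction electrons to localized spins via an exchange interaction; BSS-QMC treats such lattice models by sampling the fermionic determinant directly, which is the source of the favorable scaling with inverse temperature mentioned above.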