902 results for Eigenvalue Bounds
Abstract:
In this thesis we discuss the main notions of an axiomatic approach to an invariant Harnack inequality. This procedure, which originated in techniques for fully nonlinear elliptic operators, was developed by Di Fazio, Gutiérrez, and Lanconelli in the general setting of doubling Hölder quasi-metric spaces. The main tools of the approach are the so-called double ball property and critical density property: the validity of these properties implies an invariant Harnack inequality. We are mainly interested in horizontally elliptic operators, i.e. second-order linear degenerate-elliptic operators which are elliptic with respect to the horizontal directions of a Carnot group. An invariant Harnack inequality of Krylov-Safonov type is still an open problem in this context. In the thesis we show how the double ball property is related to the solvability of a kind of exterior Dirichlet problem for these operators; more precisely, it is a consequence of the existence of suitable interior barrier functions of Bouligand type. Following these ideas, we prove the double ball property for a generic step-two Carnot group. Regarding the critical density, we generalize to the setting of H-type groups some arguments by Gutiérrez and Tournier for the Heisenberg group. We show that the critical density holds in these particular contexts under a Cordes-Landis type condition on the coefficient matrix of the operator. By the axiomatic approach, we thus prove an invariant Harnack inequality in H-type groups which is uniform in the class of coefficient matrices with prescribed eigenvalue bounds satisfying such a Cordes-Landis condition.
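Schematically, the invariant Harnack inequality at stake has the following form (a generic statement of the type of inequality, not the thesis's precise formulation): for every nonnegative solution $u$ of $\mathcal{L}u = 0$ in a quasi-metric ball $B_{2r}(x_0)$,

```latex
\sup_{B_r(x_0)} u \;\le\; C \,\inf_{B_r(x_0)} u ,
```

where invariance means that the constant $C$ does not depend on $u$, $r$, or $x_0$, and is uniform over the class of coefficient matrices with the prescribed eigenvalue bounds.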
Abstract:
The folding process of the major light-harvesting complex of photosystem II from higher plants (light-harvesting complex II, LHCII) has been studied repeatedly, but the experiments have always been performed on ensembles. The folding kinetics of LHCII published so far do not permit unambiguous conclusions about the diversity of the folding pathways. The aim of this work was therefore to record folding kinetics of individual LHCII molecules during complex formation, in order to obtain further information on the folding mechanism and on the question of whether several distinct pathways are taken.
This first required establishing a surface immobilization with glass as the support material. After attempts to accomplish this immobilization via a His6-tag or via a heterobifunctional linker had failed, immobilization of the biotin-labelled protein on surface-bound avidin was achieved. The quality of this immobilization was verified both by binding assays with fluorescence-dye-labelled protein and by direct inspection of the surface by atomic force microscopy. The surface coverage optimal for the subsequent experiments was determined in a confocal fluorescence microscope, and it was ensured that the proteins were immobilized individually on the surface.
On this basis, LHCII complexes previously reconstituted in vitro were immobilized, and controlled denaturation experiments were undertaken in order to record dissociation kinetics by total internal reflection fluorescence (TIRF) microscopy.
Difficulties arose concerning the lifetime of the complexes under laser illumination, since fluorescence quenching caused by photodestruction of the pigments could not be distinguished from dissociation of the LHCII. Despite various measures to extend the lifetime, it could not be increased to the extent the experiments required.
For the actual main goal of this work, the recording of single-molecule folding kinetics, a method for reconstituting surface-immobilized LHCII apoproteins had to be developed. This was achieved by means of a mixed-detergent reconstitution. The success of the reconstitution could be observed experimentally both in the fluorimeter, via the intra-complex energy transfer to an infrared fluorophore covalently bound to the protein, and directly in the TIRF setup. Here, too, bleaching of the complexes under the excitation laser was observed after about 80 seconds.
In experiments aimed at observing the complex-formation process, it emerged that the reconstitution is evidently severely disturbed by the illumination. A further problem was the very strong background fluorescence caused by the pigment solution required for the reconstitution, which overwhelmed the fluorescence of the complexes despite the TIRF excitation of exclusively surface-bound material. The reconstitution of surface-immobilized LHCII proteins could thus be demonstrated in before-and-after images, but the folding process itself could not be recorded within the scope of this work.
Abstract:
In this thesis we study a model describing the spread of an infection. Particles are first distributed on the real line according to a Poisson point process. Up to a certain point on the line, all particles carry an infection. While healthy particles do not move, the infected particles follow the paths of mutually independent Brownian motions and spread the infection at every site they visit. Whenever an infected particle meets a healthy one, the latter becomes infected from that moment on and likewise starts to follow a Brownian path and to spread the infection. In this way the rightmost site R_t at which the infection has been spread moves to the right. Using the subadditive ergodic theorem, we show that this site advances with linear speed, and we give an upper and a lower bound for the speed of propagation. We then show that the process has regeneration times, i.e. random times at which it performs a kind of restart under particular initial conditions, and we use these for a further characterization of the propagation speed. The regeneration times also yield a central limit theorem for R_t and allow us to show that the distribution of the infected particles, seen from the rightmost infected site, converges to an invariant distribution.
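The linear growth of R_t can be illustrated with a toy discrete-time simulation of the model (a sketch under simplifying assumptions, not the construction used in the thesis: a single seed infected particle at the origin instead of an infected half-line, a fixed time step, and infection passed whenever a Brownian step sweeps over a healthy particle's position; all parameter values are illustrative):

```python
import numpy as np

def front_speed(T=50.0, dt=0.02, lam=1.0, length=400.0, seed=1):
    """Estimate the speed of the rightmost infected site R_t in a
    discretized version of the infection model."""
    rng = np.random.default_rng(seed)
    # Poisson(lam) points on (0, length], plus a seed infected particle at 0
    n = rng.poisson(lam * length)
    pos = np.concatenate(([0.0], np.sort(rng.uniform(0.0, length, n))))
    infected = np.zeros(pos.size, dtype=bool)
    infected[0] = True
    r_max = 0.0
    sqdt = np.sqrt(dt)
    for _ in range(int(T / dt)):
        idx = np.flatnonzero(infected)
        old = pos[idx]
        new = old + sqdt * rng.standard_normal(old.size)  # Brownian step
        pos[idx] = new
        # a healthy particle becomes infected if some step sweeps over it
        healthy = np.flatnonzero(~infected)
        hp = pos[healthy]
        hit = np.zeros(hp.size, dtype=bool)
        for a, b in zip(np.minimum(old, new), np.maximum(old, new)):
            hit |= (hp >= a) & (hp <= b)
        infected[healthy[hit]] = True
        r_max = max(r_max, pos[infected].max())
    return r_max / T  # estimate of lim R_t / t
```

Calling `front_speed()` returns a strictly positive estimate, consistent with the almost sure linear growth R_t/t -> v > 0 established via the subadditive ergodic theorem.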
Abstract:
Nitrogen is an essential nutrient. For humans, animals and plants it is a constituent element of proteins and nucleic acids. Although the majority of the Earth's atmosphere consists of elemental nitrogen (N2, 78%), only a few microorganisms can use it directly. To be available to higher plants and animals, elemental nitrogen must be converted to a reactive oxidized form. This conversion happens within the nitrogen cycle, carried out by free-living microorganisms, by symbiotic Rhizobium bacteria, or by lightning. Since the beginning of the 20th century humans have been able to synthesize reactive nitrogen through the Haber-Bosch process, and as a result the food security of the world population has improved noticeably. On the other hand, the increased nitrogen input results in acidification and eutrophication of ecosystems and in loss of biodiversity. Negative health effects for humans arose as well, such as fine particulate matter and summer smog. Furthermore, reactive nitrogen plays a decisive role in atmospheric chemistry and in the global cycles of pollutants and nutrients.
Nitrogen monoxide (NO) and nitrogen dioxide (NO2) belong to the reactive trace gases and are grouped under the generic term NOx. They are important components of atmospheric oxidative processes and influence the lifetime of various less reactive greenhouse gases. NO and NO2 are generated, among other sources, in combustion processes by oxidation of atmospheric nitrogen, as well as by biological processes within the soil. In the atmosphere NO is converted very quickly into NO2. NO2 is then oxidized to nitrate (NO3-) and nitric acid (HNO3), which binds to aerosol particles. The bound nitrate is finally washed out of the atmosphere by dry and wet deposition. Catalytic reactions of NOx are an important part of atmospheric chemistry, forming or decomposing tropospheric ozone (O3). In the atmosphere NO, NO2 and O3 are in photostationary equilibrium, which is why one speaks of the NO-NO2-O3 triad.
In regions with elevated NO concentrations, reactions with air pollutants can form NO2, altering the equilibrium of ozone formation.
Plants take up the essential nutrient nitrogen mainly as dissolved NO3- entering the roots. Atmospheric nitrogen is oxidized to NO3- within the soil by bacteria, through nitrogen fixation or through ammonium formation and nitrification. In addition, atmospheric NO2 is taken up directly through the stomata. Inside the apoplast, NO2 disproportionates to nitrate and nitrite (NO2-), which can enter the plant's metabolic processes. The enzymes nitrate and nitrite reductase convert nitrate and nitrite to ammonium (NH4+). NO2 gas exchange is controlled by pressure gradients inside the leaves, the stomatal aperture and leaf resistances. Plant stomatal regulation is affected by climate factors such as light intensity, temperature and water vapor pressure deficit.
This thesis aims to contribute to the understanding of the role of vegetation in the atmospheric NO2 cycle and to discuss the NO2 compensation point concentration (mcomp,NO2). To this end, the NO2 exchange between the atmosphere and spruce (Picea abies) was measured at the leaf level with a dynamic plant chamber system under laboratory and field conditions. Measurements took place during the EGER project (June-July 2008). In addition, NO2 data collected on oak (Quercus robur) during the ECHO project (July 2003) were analyzed. The measuring system used allowed the simultaneous determination of NO, NO2, O3, CO2 and H2O exchange rates. The calculation of the NO, NO2 and O3 fluxes is based on the generally small differences (∆mi) measured between the inlet and the outlet of the chamber; consequently, high accuracy and specificity of the analyzers are necessary. To meet these requirements, a highly specific NO/NO2 analyzer was used and the whole measurement system was optimized for an enduring measurement precision.
The data analysis yielded a significant mcomp,NO2 only when the statistical significance of ∆mi was established. Consequently, the significance of ∆mi was used as a data quality criterion. Photochemical reactions of the NO-NO2-O3 triad in the volume of the dynamic plant chamber must be taken into account in the determination of the NO, NO2 and O3 exchange rates; otherwise the deposition velocity (vdep,NO2) and mcomp,NO2 will be overestimated. No significant mcomp,NO2 could be determined for spruce under laboratory conditions, but under field conditions mcomp,NO2 was found to lie between 0.17 and 0.65 ppb, and vdep,NO2 between 0.07 and 0.42 mm s-1. In the field data for oak, no NO2 compensation point concentration could be determined; vdep,NO2 ranged between 0.6 and 2.71 mm s-1. There is increasing evidence that forests are mainly a sink for NO2 and that potential NO2 emissions are low. Only if high NO soil emissions are assumed can more NO2 be formed by reaction with O3 than the plants are able to take up; under these circumstances forests can be a source of NO2.
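The flux calculation described above can be sketched as follows (a minimal sketch with purely illustrative numbers, not measured values from the thesis; the function names and parameter values are assumptions): the exchange flux follows from the small inlet/outlet mixing-ratio difference ∆mi scaled by the purge flow per enclosed leaf area, and the deposition velocity is the flux normalized by the ambient concentration.

```python
def chamber_flux(purge_flow_m3_s, leaf_area_m2, c_in, c_out):
    """Exchange flux from the inlet/outlet difference of a dynamic
    plant chamber; positive values mean deposition to the plant."""
    return purge_flow_m3_s / leaf_area_m2 * (c_in - c_out)

def deposition_velocity(flux, c_ambient):
    """Deposition velocity v_dep = flux / ambient concentration."""
    return flux / c_ambient

# illustrative values (assumed, not from the thesis)
q = 60e-3 / 60            # 60 L min-1 purge flow, in m3 s-1
A = 0.2                   # enclosed needle/leaf area, m2
c_in, c_out = 1.00, 0.95  # NO2 mixing ratio at inlet/outlet, ppb

F = chamber_flux(q, A, c_in, c_out)    # ppb m s-1
v_dep = deposition_velocity(F, c_out)  # m s-1
print(round(v_dep * 1e3, 3), "mm s-1")  # → 0.263 mm s-1
```

A 5% inlet/outlet depletion at these (assumed) settings yields v_dep of about 0.26 mm s-1, i.e. within the 0.07-0.42 mm s-1 range reported above for spruce under field conditions.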
Abstract:
The effectiveness of interim relief, understood as timely, adequate and full protection, has been the guiding thread of the evolution of administrative justice, which, over a period of more than a century and thanks to the work of case law and legal scholarship, has developed into a true judicial process. A recent landmark of this evolution, and at the same time its symbol, is certainly the Code of Administrative Procedure enacted by legislative decree no. 104 of 2 July 2010. The research reported here began at the same time as the new text entered into force, and was therefore also an opportunity to observe its first applications in case law. In particular, rather than a mere survey of its lengthy articles, the Code was read in the light of a present-day balancing not only of the principle of effectiveness but also of the principle of instrumentality, which traditionally ties the interim phase to the merits phase. The results of the research reveal the legislature's intention to confirm this instrumental relationship, in order to counter an uncontrolled drift towards interim measures with sometimes irreversible effects, as had occurred in judicial practice, while at the same time showing the intention to extend the scope of interim protection. Looking at what interim protection has become today, one observes a strengthening of the conformative effects, typical of judgments on the merits, which have extended to the interim phase. The courts, although aware that interim protection is not a response based on full cognition but on summary cognition, nevertheless intend to guarantee timely and effective protection, including through particular procedural techniques such as the remand, to which ample space is devoted within the research.
In its final part the research focuses, again with a view to the overall effects of interim protection, on the moment of enforcement and hence on the compliance proceedings (giudizio di ottemperanza).
Abstract:
The inversion of seismo-volcanic events is performed to retrieve the source geometry and to determine the volumetric budget of the source. Such observations have proven to be an important tool for the seismological monitoring of volcanoes. We developed a novel technique for the non-linear constrained inversion of low-frequency seismo-volcanic events. Unconstrained linear inversion methods work well when a dense network of broadband seismometers is available; the constrained inversion technique we propose has proven efficient even with a reduced network configuration and a low signal-to-noise ratio. The waveform inversion is performed in the frequency domain, constraining the source mechanism to vary only in its magnitude during the event: the eigenvector orientations and the eigenvalue ratios are kept constant. This significantly reduces the number of parameters to invert, making the procedure more stable. The method has been tested on a synthetic dataset reproducing realistic very-long-period (VLP) signals of Stromboli volcano. The information obtained from the synthetic tests is used to assess the reliability of the results obtained on a VLP dataset recorded at Stromboli volcano and on low-frequency events recorded at Vesuvius volcano.
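The constraint described above reduces each frequency to a scalar linear least-squares problem: with the mechanism (eigenvector orientations and eigenvalue ratios) held fixed, the station spectra d_k(w) are modelled as a common fixed-mechanism prediction g_k(w) scaled by one complex magnitude m(w). A minimal sketch with synthetic data (the variable names and test setup are assumptions, not the thesis's implementation):

```python
import numpy as np

def invert_magnitude(d, g):
    """Least-squares scalar magnitude m(w) per frequency, for data
    d[k, w] (stations x frequencies) and fixed-mechanism predictions
    g[k, w]: minimise sum_k |d_k - m g_k|^2 independently at each w."""
    num = np.sum(np.conj(g) * d, axis=0)
    den = np.sum(np.abs(g) ** 2, axis=0)
    return num / den

# synthetic check: 6 stations, 64 frequencies, known magnitude spectrum
rng = np.random.default_rng(0)
g = rng.standard_normal((6, 64)) + 1j * rng.standard_normal((6, 64))
m_true = np.exp(-0.05 * np.arange(64)) * np.exp(1j * 0.1 * np.arange(64))
d = m_true * g          # noise-free synthetic station spectra
m_est = invert_magnitude(d, g)
```

With the mechanism fixed, the unknowns drop from the full set of moment-tensor components per frequency to a single scalar per frequency, which is what stabilizes the inversion for sparse networks.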
Abstract:
The use of linear programming in various areas has increased with the significant improvement of specialized solvers. Linear programs are used as such to model practical problems, or as subroutines in algorithms such as formal proofs or branch-and-cut frameworks. In many situations a certified answer is needed, for example the guarantee that the linear program is feasible or infeasible, or a provably safe bound on its objective value. Most of the available solvers work with floating-point arithmetic and are thus subject to its shortcomings, such as rounding errors or underflow, so they can deliver incorrect answers. While adequate for some applications, this is unacceptable for critical applications like flight control or nuclear plant management because of the potential catastrophic consequences. We propose a method that gives a certified answer whether a linear program is feasible or infeasible, or returns 'unknown'. The advantage of our method is that it is reasonably fast and rarely answers 'unknown'. It works by computing a safe solution that is, in a certain sense, the best possible in the relative interior of the feasible set. To certify the relative interior, we employ exact arithmetic, whose use is nevertheless limited in general to critical places, allowing us to remain computationally efficient. Moreover, when certain conditions are fulfilled, our method is able to deliver a provable bound on the objective value of the linear program. We test our algorithm on typical benchmark sets and obtain higher rates of success compared to previous approaches to this problem, while keeping the running times acceptably small. The computed objective value bounds are in most cases very close to the known exact objective values. We demonstrate the usability of the method by additionally employing a variant of it in a different scenario, namely to improve the results of a Satisfiability Modulo Theories solver.
Our method is used as a black box in the nodes of a branch-and-bound tree to implement conflict learning based on the certificate of infeasibility for linear programs consisting of subsets of linear constraints. The generated conflict clauses are in general small and offer good prospects for reducing the search space. Compared to other methods, we obtain significant improvements in running time, especially on the large instances.
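The role of exact arithmetic "at critical places" can be illustrated by the final certification step: a floating-point solver proposes a candidate point, and only the verification of the constraints is carried out in exact rational arithmetic. This is a minimal sketch of that idea, not the thesis's algorithm:

```python
from fractions import Fraction

def certify_feasible(A, b, x):
    """Exactly verify A @ x <= b (componentwise) for a candidate x
    proposed by a floating-point solver.  Binary floats convert to
    Fractions without rounding, so the check itself cannot err."""
    xq = [Fraction(v) for v in x]
    return all(
        sum(Fraction(a) * xi for a, xi in zip(row, xq)) <= Fraction(bv)
        for row, bv in zip(A, b)
    )

# toy system:  x1 + x2 <= 2,  -x1 <= 0,  -x2 <= 0
A = [[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]]
b = [2.0, 0.0, 0.0]
```

A candidate in the relative interior (satisfying the inequalities strictly) is the robust choice here, since it tolerates the rounding incurred before the exact check.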
Abstract:
This thesis is concerned with the flavor problem of Randall-Sundrum models and their strongly coupled dual theories. These models are particularly well motivated extensions of the Standard Model, because they simultaneously address the gauge hierarchy problem and the hierarchies in the quark masses and mixings. To put this into context, special attention is given to the concepts underlying theories that can explain the hierarchy problem and the flavor structure of the Standard Model (SM). The AdS/CFT duality is introduced and its implications for the Randall-Sundrum model with fermions in the bulk and general bulk gauge groups are investigated. It is shown that the different terms in the general 5D propagator of a bulk gauge field can be related to the corresponding diagrams of the strongly coupled dual, which allows for a deeper understanding of the origin of the flavor-changing neutral currents generated by the exchange of the Kaluza-Klein excitations of these bulk fields. In the numerical analysis, different observables that are sensitive to corrections from the tree-level exchange of these resonances are presented on the basis of updated experimental data from the Tevatron and LHC experiments. This includes electroweak precision observables, namely corrections to the S and T parameters followed by corrections to the Zbb vertex; flavor-changing observables with flavor changes at one vertex, viz. BR(Bd -> mu+mu-) and BR(Bs -> mu+mu-), and at two vertices, viz. S_psiphi and |eps_K|; as well as bounds from direct detection experiments. The analysis shows that all of these bounds can be brought into agreement with a new-physics scale Lambda_NP in the TeV range, except for the CP-violating quantity |eps_K|, which requires Lambda_NP = O(10) TeV in the absence of fine-tuning.
The numerous modifications of the Randall-Sundrum model in the literature that try to attenuate this bound are reviewed and categorized. Subsequently, a novel solution to this flavor problem, based on an extended color gauge group in the bulk, and its thorough implementation in the RS model are presented, together with an analysis of the observables mentioned above in the extended model. This solution is especially motivated from the point of view of the strongly coupled dual theory, and the implications for strongly coupled models of new physics that do not possess a holographic dual are examined. Finally, the top quark plays a special role in models with a geometric explanation of flavor hierarchies, and the predictions of the Randall-Sundrum model, with and without the proposed extension, for the forward-backward asymmetry A_FB^t in top-pair production are computed.
Abstract:
In the first chapter, I develop a panel no-cointegration test which extends the bounds test of Pesaran, Shin and Smith (2001) to the panel framework by considering the individual regressions in a Seemingly Unrelated Regression (SUR) system. This makes it possible to take into account unobserved common factors that contemporaneously affect all units of the panel and provides, at the same time, unit-specific test statistics. Moreover, the approach is particularly suited when the number of individuals in the panel is small relative to the number of time series observations. I develop the algorithm to implement the test and use Monte Carlo simulation to analyze its properties. The small-sample properties of the test are remarkable compared to its single-equation counterpart. I illustrate the use of the test with a test of Purchasing Power Parity in a panel of EU15 countries. In the second chapter of my PhD thesis, I verify the Expectation Hypothesis of the Term Structure (EHTS) in the repurchase agreement (repo) market with a new testing approach. I consider an "inexact" formulation of the EHTS, which models a time-varying component in the risk premia, and I treat the interest rates as a non-stationary cointegrated system. The effect of heteroskedasticity is controlled by means of testing procedures (bootstrap and heteroskedasticity correction) which are robust to variance and covariance shifts over time. I find that the long-run implications of the EHTS are verified. A rolling-window analysis clarifies that the EHTS is rejected only in periods of turbulence in financial markets. The third chapter introduces the Stata command "bootrank", which implements the bootstrap likelihood ratio rank test algorithm developed by Cavaliere et al. (2012). The command is illustrated through an empirical application on the term structure of interest rates in the US.
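The resampling logic behind such bootstrap procedures can be sketched generically (a toy i.i.d. bootstrap of a t-type statistic for a mean-zero null, not the Cavaliere et al. (2012) rank-test algorithm that `bootrank` implements):

```python
import numpy as np

def bootstrap_pvalue(x, stat, n_boot=999, seed=0):
    """One-sided bootstrap p-value: impose the null (mean zero) by
    recentring, resample with replacement, and locate the observed
    statistic in the bootstrap distribution."""
    rng = np.random.default_rng(seed)
    obs = stat(x)
    centred = x - x.mean()  # the null holds in the resampled world
    boot = np.array([
        stat(rng.choice(centred, size=x.size, replace=True))
        for _ in range(n_boot)
    ])
    return (1 + np.sum(boot >= obs)) / (n_boot + 1)
```

The same pattern, with the resampling scheme and the statistic replaced by their likelihood-ratio rank-test counterparts, underlies bootstrap cointegration-rank testing.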
Abstract:
The basic problem among arc-routing problems with several vehicles is the Capacitated Arc Routing Problem (CARP). Practical applications of the CARP are found, for example, in waste collection and mail delivery. The goal is to compute a cost-minimal set of routes that serves all required edges while respecting the vehicle capacity. In this thesis a cut-first branch-and-price-second method is developed. In the first phase, cutting planes are generated and added to the master problem of the second phase. The subproblem is a shortest-path problem with resource constraints and is solved to supply new columns for the master problem. Integral CARP solutions are guaranteed by a new hierarchical branching scheme. Extensive computational studies show the effectiveness of this algorithm. Combined location and arc-routing problems allow a more realistic modelling of delivery variants in mail delivery. In this thesis, two mathematical models each are presented for Park and Loop and for Park and Loop with Curbline. The models for each problem differ in how feasible transfer routes are modelled: the first model type uses subtour elimination constraints, while the second employs flow variables and flow conservation constraints. The computational study shows that a MIP solver can often solve the second model type in less computing time, or delivers better objective values when the time limit is reached.
Abstract:
Light pseudoscalar bosons, such as the axion originally proposed as a solution to the strong CP problem, would cause a new spin-dependent short-range interaction. In this thesis, an experiment is presented to search for an axion-mediated short-range interaction between a nucleon and the spin of a polarized bound neutron. This interaction causes a shift in the precession frequency of nuclear-spin-polarized gases in the presence of an unpolarized mass. To get rid of magnetic field drifts, co-located nuclear-spin-polarized 3He and 129Xe atoms were used. The free nuclear spin precession frequencies were measured in a homogeneous magnetic guiding field of about 350 nT using low-Tc SQUID detectors. The whole setup was housed in a magnetically shielded room at the Physikalisch-Technische Bundesanstalt (PTB) in Berlin. With this setup, long nuclear spin coherence times, i.e. transverse relaxation times, of 5 h for 129Xe and 53 h for 3He could be achieved. The results of the last run in September 2010 are presented, which give new upper limits on the scalar-pseudoscalar coupling of axion-like particles in the axion-mass window from 10^(-2) eV to 10^(-6) eV. The laboratory upper bounds were improved by up to 4 orders of magnitude.
Abstract:
Although the Standard Model of particle physics (SM) provides an extremely successful description of ordinary matter, astronomical observations show that it accounts for only around 5% of the total energy density of the Universe, whereas around 30% is contributed by dark matter. Motivated by anomalies in cosmic-ray observations and by attempts to resolve open questions of the SM, such as the (g-2)_mu discrepancy, proposed U(1) extensions of the SM gauge group have attracted attention in recent years. In the U(1) extensions considered here, a new light messenger particle, the hidden photon, couples to the hidden sector as well as to the electromagnetic current of the SM by kinetic mixing. This allows a search for this particle in laboratory experiments exploring the electromagnetic interaction. Various experimental programs have been started to search for hidden photons, for instance electron-scattering experiments, which are a versatile tool to explore various physics phenomena. One approach is the dedicated search in fixed-target experiments at modest energies, as performed at MAMI or at JLAB. In these experiments the scattering of an electron beam off a hadronic target, e+(A,Z) -> e+(A,Z)+l^+l^-, is investigated, and a search for a very narrow resonance in the invariant mass distribution of the lepton pair is performed. This requires an accurate understanding of the theoretical basis of the underlying processes. For this purpose, the first part of this work demonstrates how the hidden photon can be motivated from existing puzzles encountered at the precision frontier of the SM. The main part of this thesis deals with the analysis of the theoretical framework for electron-scattering fixed-target experiments searching for hidden photons. As a first step, the cross section for the bremsstrahlung emission of hidden photons in such experiments is studied.
Based on these results, the applicability of the Weizsäcker-Williams approximation to calculate the signal cross section of the process, which is widely used to design such experimental setups, is investigated. In a next step, the reaction e+(A,Z)->e+(A,Z)+l^+l^- is analyzed as signal and background process in order to describe existing data obtained by the A1 experiment at MAMI with the aim to give accurate predictions of exclusion limits for the hidden photon parameter space. Finally, the derived methods are used to find predictions for future experiments, e.g., at MESA or at JLAB, allowing for a comprehensive study of the discovery potential of the complementary experiments. In the last part, a feasibility study for probing the hidden photon model by rare kaon decays is performed. For this purpose, invisible as well as visible decays of the hidden photon are considered within different classes of models. This allows one to find bounds for the parameter space from existing data and to estimate the reach of future experiments.
Abstract:
This thesis studies the variation of closed subspaces of a Hilbert space associated with isolated components of the spectra of self-adjoint operators under bounded additive perturbations. Of particular interest is the least restrictive condition on the norm of the perturbation that guarantees that the difference of the corresponding orthogonal projections is a strict norm contraction. An overview of the results obtained so far is given. Based on an iterative approach, a general bound on the variation of the subspaces is obtained for perturbations depending smoothly on a real parameter. By introducing a coupling parameter, this result is applied to the case of additive perturbations, thereby improving previously known results. In the case of additive perturbations, the bounds on the variation of the subspaces are sharpened further by an optimization procedure for the interpolation points in the iterative approach. The corresponding results are the best obtained to date.
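Schematically, the question treated above is the following (a generic formulation; the specific constants obtained in the thesis are not reproduced here). Let $A$ be self-adjoint, let $\sigma$ be an isolated component of its spectrum with gap $d = \operatorname{dist}\bigl(\sigma, \operatorname{spec}(A)\setminus\sigma\bigr)$, let $V$ be a bounded self-adjoint perturbation, and let $P$, $Q$ be the spectral projections associated with $\sigma$ and with the corresponding perturbed spectral component of $A+V$. One seeks the largest constant $c_{\mathrm{crit}}$ such that

```latex
\|V\| < c_{\mathrm{crit}}\, d \quad\Longrightarrow\quad \|P - Q\| < 1 .
```

Strict contraction $\|P - Q\| < 1$ matters because it guarantees, in particular, that the subspaces $\operatorname{ran} P$ and $\operatorname{ran} Q$ are connected by a unitary transformation.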
Abstract:
In the domain of safety-critical embedded systems, the design process for applications is highly complex. For a given hardware architecture, electronic control units can be upgraded so that all existing processes and signals execute on time. The timing requirements are strict and must be met in every periodic recurrence of the processes, since guaranteeing the parallel execution is of utmost importance. Existing approaches can compute design alternatives quickly, but they do not guarantee that the cost of the necessary hardware changes is minimal. We present an approach that computes cost-minimal solutions to this problem satisfying all timing constraints. Our algorithm uses linear programming with column generation, embedded in a tree structure, to provide lower and upper bounds during the optimization process. Through a decomposition of the main problem, the complex side constraints guaranteeing periodic execution are shifted into independent subproblems, which are formulated as integer linear programs. Both the analyses of process execution and the methods of signal transmission are examined, and linearized representations are given. Furthermore, we present a new formulation for fixed-priority execution that additionally computes worst-case process response times, which are needed in scenarios where timing constraints are imposed on subsets of processes and signals. We demonstrate the applicability of our methods by analyzing instances containing process structures from real applications. Our results show that lower bounds can be computed quickly in order to prove the optimality of heuristic solutions.
When delivering optimal solutions with response times, our new formulation compares favourably with other approaches in terms of running time. The best results are obtained with a hybrid approach that combines heuristic initial solutions, preprocessing, and a heuristic computation phase followed by a short exact one.
Abstract:
The thesis presents a probabilistic approach to the theory of semigroups of operators, with particular attention to Markov and Feller semigroups. The first goal of this work is the proof of the fundamental Feynman-Kac formula, which gives the solution of certain parabolic Cauchy problems in terms of the expected value of the initial condition evaluated along the associated stochastic diffusion process. The second goal is the characterization of the principal eigenvalue of the generator of a semigroup with Markov transition probability function, and of second-order elliptic operators with real coefficients that are not necessarily self-adjoint. The thesis is divided into three chapters. In the first chapter we study Brownian motion and some of its main properties, stochastic processes, the stochastic integral and the Itô formula, in order to arrive, in the last section, at the proof of the Feynman-Kac formula. The second chapter is devoted to the probabilistic approach to semigroup theory, and it is here that we introduce Markov and Feller semigroups; special emphasis is given to the Feller semigroup associated with Brownian motion. The third and last chapter is divided into two sections. In the first we present the abstract characterization of the principal eigenvalue of the infinitesimal generator of a semigroup of operators acting on continuous functions over a compact metric space. In the second section this approach is used to study the principal eigenvalue of elliptic partial differential operators with real coefficients. Finally, the appendix gathers some of the technical results used in the thesis in more detail: Appendix A is devoted to the Sion minimax theorem, while in Appendix B we prove the Chernoff product formula for not necessarily self-adjoint operators.
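In its classical Brownian form, the Feynman-Kac formula referred to above reads as follows (the prototype case; the thesis works in the more general setting of diffusion processes): for suitable initial data $f$ and potential $V$, the solution of the Cauchy problem

```latex
\partial_t u = \tfrac{1}{2}\,\Delta u - V u , \qquad u(0,\cdot) = f ,
```

admits the probabilistic representation

```latex
u(t,x) = \mathbb{E}_x\!\left[ f(B_t)\, \exp\!\left( -\int_0^t V(B_s)\, ds \right) \right] ,
```

where $(B_t)_{t \ge 0}$ is a Brownian motion started at $x$ and $\mathbb{E}_x$ denotes the corresponding expectation.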