971 results for formalism


Relevance: 10.00%

Abstract:

In this thesis we develop further the functional renormalization group (RG) approach to quantum field theory (QFT) based on the effective average action (EAA) and on the exact flow equation that it satisfies. The EAA is a generalization of the standard effective action that interpolates smoothly between the bare action for k → ∞ and the standard effective action for k → 0. In this way, the problem of performing the functional integral is converted into the problem of integrating the exact flow of the EAA from the UV to the IR. The EAA formalism deals naturally with several different aspects of a QFT. One aspect is related to the discovery of non-Gaussian fixed points of the RG flow that can be used to construct continuum limits. In particular, the EAA framework is a useful setting in which to search for Asymptotically Safe theories, i.e. theories valid up to arbitrarily high energies. A second aspect in which the EAA reveals its usefulness is non-perturbative calculations. In fact, the exact flow that it satisfies is a valuable starting point for devising new approximation schemes. In the first part of this thesis we review and extend the formalism; in particular, we derive the exact RG flow equation for the EAA and the related hierarchy of coupled flow equations for the proper vertices. We show how standard perturbation theory emerges as a particular way to iteratively solve the flow equation if the starting point is the bare action. Next, we explore both technical and conceptual issues by means of three different applications of the formalism: to QED, to general non-linear sigma models (NLσM) and to matter fields on curved spacetimes. In the main part of this thesis we construct the EAA for non-abelian gauge theories and for quantum Einstein gravity (QEG), using the background field method to implement the coarse-graining procedure in a gauge-invariant way. We propose a new truncation scheme in which the EAA is expanded in powers of the curvature or field strength. Crucial to the practical use of this expansion is the development of new techniques to evaluate functional traces, such as the algorithm proposed in this thesis, which makes it possible to project the flow of all terms in the EAA that are analytic in the fields. As an application we show how the low-energy effective action for quantum gravity emerges as the result of integrating the RG flow. In any treatment of theories with local symmetries that introduces a reference scale, the question of preserving gauge invariance along the flow becomes central. In the EAA framework this problem is dealt with by the use of the background field formalism, which comes at the cost of enlarging the theory space where the EAA lives to the space of functionals of both fluctuation and background fields. In this thesis we study how the identities dictated by the symmetries are modified by the introduction of the cutoff, and we study so-called bimetric truncations of the EAA that contain both fluctuation and background couplings. In particular, we confirm the existence of a non-Gaussian fixed point for QEG, which is at the heart of the Asymptotic Safety scenario in quantum gravity, in the enlarged bimetric theory space, where the running of the cosmological constant and of Newton's constant is influenced by fluctuation couplings.
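
For orientation, the exact flow equation for the EAA referred to above is conventionally written in the Wetterich form (a standard result of the functional RG literature, quoted here for context rather than from the thesis itself):

\partial_k \Gamma_k[\phi] \;=\; \tfrac{1}{2}\,\mathrm{Tr}\!\left[\left(\Gamma_k^{(2)}[\phi] + R_k\right)^{-1}\partial_k R_k\right],

where R_k is the infrared regulator implementing the coarse graining and \Gamma_k^{(2)} is the second functional derivative of the EAA with respect to the fields.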

Relevance: 10.00%

Abstract:

In this thesis, a systematic analysis of the bar B → X_s gamma photon spectrum in the endpoint region is presented. The endpoint region refers to a kinematic configuration of the final state in which the photon has a large energy, with m_b − 2E_gamma = O(Lambda_QCD), while the jet has a large energy but small invariant mass. Using methods of soft-collinear effective theory and heavy-quark effective theory, it is shown that the spectrum can be factorized into hard, jet, and soft functions, each encoding the dynamics at a certain scale. The relevant scales in the endpoint region are the heavy-quark mass m_b, the hadronic energy scale Lambda_QCD, and an intermediate scale sqrt(Lambda_QCD m_b) associated with the invariant mass of the jet. It is found that the factorization formula contains two different types of contributions, distinguishable by the space-time structure of the underlying diagrams. On the one hand, there are the direct photon contributions, which correspond to diagrams with the photon emitted directly from the weak vertex. The resolved photon contributions, on the other hand, arise at O(1/m_b) whenever the photon couples to light partons. In this work, these contributions are explicitly defined in terms of convolutions of jet functions with subleading shape functions. While the direct photon contributions can be expressed in terms of a local operator product expansion when the photon spectrum is integrated over a range larger than the endpoint region, the resolved photon contributions always remain non-local. Thus, they are responsible for a non-perturbative uncertainty in the partonic predictions. In this thesis, the effect of these uncertainties is estimated in two different phenomenological contexts. First, the hadronic uncertainties in the bar B → X_s gamma branching fraction, defined with a cut E_gamma > 1.6 GeV, are discussed. It is found that the resolved photon contributions give rise to an irreducible theory uncertainty of approximately 5%. As a second application of the formalism, the influence of the long-distance effects on the direct CP asymmetry is considered. It is shown that these effects are dominant in the Standard Model and that a range of −0.6% < A_CP^SM < 2.8% is possible for the asymmetry if resolved photon contributions are taken into account.
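
Schematically, and in the notation standard in the SCET literature (not quoted from the thesis), the structure described above reads

\frac{d\Gamma}{dE_\gamma} \;\sim\; H \cdot \left(J \otimes S\right) \;+\; \frac{1}{m_b}\sum_i H_i \cdot \left(J_i \otimes \bar J_i \otimes s_i\right),

where \otimes denotes a convolution, the first term collects the direct photon contributions with the leading shape function S, and the second term represents the resolved photon contributions, in which additional jet functions \bar J_i describing the conversion of the photon into light partons are convolved with subleading shape functions s_i.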

Relevance: 10.00%

Abstract:

In this thesis, we study the phenomenology of selected observables in the context of the Randall-Sundrum scenario of a compactified warped extra dimension. Gauge and matter fields are assumed to live in the whole five-dimensional space-time, while the Higgs sector is localized on the infrared boundary. An effective four-dimensional description is obtained via Kaluza-Klein decomposition of the five-dimensional quantum fields. The symmetry-breaking effects due to the Higgs sector are treated exactly, and the decomposition of the theory is performed in a covariant way. We develop a formalism that allows for a straightforward generalization to scenarios with a gauge group extended compared to that of the Standard Model of elementary particle physics. As an application, we study the so-called custodial Randall-Sundrum model and compare the results to those of the original formulation. We present predictions for electroweak precision observables, the Higgs production cross section at the LHC, the forward-backward asymmetry in top-antitop production at the Tevatron, as well as the width difference, the CP-violating phase, and the semileptonic CP asymmetry in B_s decays.
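
For context, the warped five-dimensional background underlying the Randall-Sundrum scenario is conventionally written as (textbook form, not specific to this thesis)

ds^2 = e^{-2 k r_c |\phi|}\,\eta_{\mu\nu}\,dx^\mu dx^\nu - r_c^2\,d\phi^2, \qquad \phi \in [-\pi, \pi],

where k is the curvature scale, r_c the radius of the compactified fifth dimension, and the infrared (TeV) boundary mentioned above sits at |\phi| = \pi.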

Relevance: 10.00%

Abstract:

The study of dissipative quantum systems makes it possible to observe quantum phenomena even on macroscopic length scales. The microscopic model chosen in this dissertation allows the effect of quantum dissipation, which so far had been accessible only phenomenologically, to be derived and investigated mathematically and physically. The microscopic model under consideration is a one-dimensional chain of harmonic degrees of freedom that are coupled both to each other and to r anharmonic degrees of freedom. The cases of one and of two anharmonic bonds are treated explicitly in this work. To this end, an analytic separation of the harmonic from the anharmonic degrees of freedom is carried out in two different ways. The anharmonic potential is chosen as a symmetric double-well potential, which, with the help of a Wick rotation, permits the calculation of the transitions between the two minima. The harmonic degrees of freedom are eliminated by means of the well-known Feynman-Vernon path-integral formalism [21]. This work first investigates how the position of a single anharmonic bond affects its tunneling behavior. For an anharmonic bond localized far away from the boundaries, Ohmic dissipative tunneling is found, which at temperature T = 0 leads to a phase transition governed by a critical coupling constant C_crit. This phase transition had already been explained in purely phenomenological models with Ohmic dissipation by mapping the system onto the Ising model [26]. If, however, the anharmonic bond lies at one of the boundaries of the macroscopically large chain, a crossover from Ohmic to super-Ohmic dissipation occurs after a time t_D that depends on the distance between the two anharmonic bonds; this crossover is clearly visible in the kernel K_M(τ). For two anharmonic bonds, their indirect interaction plays a decisive role. It is shown that the distance D between the two bonds and the choice of initial and final states determine the dissipation. Under the assumption that both anharmonic bonds tunnel simultaneously, a tunneling probability p(t) is calculated analogously to [14], but for two anharmonic bonds. As a result we obtain either Ohmic dissipation, in the case where the two anharmonic bonds change their total length, or super-Ohmic dissipation, if the total length of the two anharmonic bonds is unchanged by the tunneling.
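
As a reminder of the terminology used above (the standard Caldeira-Leggett/Weiss classification, not a formula taken from this thesis), the type of dissipation is characterized by the low-frequency behavior of the bath spectral density,

J(\omega) \;\propto\; \omega^s \quad (\omega \to 0),

with s = 1 corresponding to Ohmic and s > 1 to super-Ohmic dissipation.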

Relevance: 10.00%

Abstract:

In the present thesis, we study quantization of classical systems with non-trivial phase spaces using the group-theoretical quantization technique proposed by Isham. Our main goal is a better understanding of global and topological aspects of quantum theory. In practice, the group-theoretical approach enables direct quantization of systems subject to constraints and boundary conditions in a natural and physically transparent manner -- cases for which the canonical quantization method of Dirac fails. First, we provide a clarification of the quantization formalism. In contrast to prior treatments, we introduce a sharp distinction between the two group structures that are involved and explain their physical meaning. The benefit is a consistent and conceptually much clearer construction of the Canonical Group. In particular, we shed light upon the 'pathological' case in which the Canonical Group must be defined via a central Lie algebra extension, and we emphasise the role of the central extension in general. In addition, we study direct quantization of a particle restricted to a half-line with a 'hard wall' boundary condition. Despite the apparent simplicity of this example, we show that a naive quantization attempt based on the cotangent bundle over the half-line as classical phase space leads to an incomplete quantum theory; the reflection, which is a characteristic aspect of the 'hard wall', is not reproduced. Instead, we propose a different phase space that realises the necessary boundary condition as a topological feature and demonstrate that quantization yields a suitable quantum theory for the half-line model. The insights gained in this special case improve our understanding of the relation between classical and quantum theory and illustrate how contact interactions may be incorporated.

Relevance: 10.00%

Abstract:

In this work I report recent results in the field of equilibrium statistical mechanics, in particular on spin glass models and monomer-dimer models. We start by giving the mathematical background and the general formalism for (disordered) spin models, with some of their applications to physical and mathematical problems. Next we move on to general aspects of the theory of spin glasses, in particular the Sherrington-Kirkpatrick model, which is of fundamental interest for this work. In Chapter 3, we introduce the Multi-species Sherrington-Kirkpatrick model (MSK), we prove the existence of the thermodynamic limit and Guerra's bound for the quenched pressure, together with a detailed analysis of the annealed and the replica symmetric regimes. The result is a multidimensional generalization of Parisi's theory. Finally we briefly illustrate the strategy of Panchenko's proof of the lower bound. In Chapter 4 we discuss the Aizenman-Contucci and the Ghirlanda-Guerra identities for a wide class of spin glass models. As an example of application, we discuss the role of these identities in the proof of the lower bound. In Chapter 5 we introduce the basic mathematical formalism of monomer-dimer models. We introduce a Gaussian representation of the partition function that will be fundamental in the rest of the work. In Chapter 6, we introduce an interacting monomer-dimer model. Its exact solution is derived and a detailed study of its analytical properties and related physical quantities is performed. In Chapter 7, we introduce quenched randomness in the monomer-dimer model and show that, under suitable conditions, the pressure is a self-averaging quantity. The main result is that, if we consider randomness only in the monomer activity, the model is exactly solvable.
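
For reference, the Sherrington-Kirkpatrick model mentioned above is defined (in its standard form, quoted here for context) by the Hamiltonian

H_N(\sigma) \;=\; -\frac{1}{\sqrt{N}}\sum_{1 \le i < j \le N} J_{ij}\,\sigma_i \sigma_j, \qquad \sigma_i \in \{-1, +1\},

where the couplings J_{ij} are i.i.d. standard Gaussian random variables; in the multi-species variant, the variance of J_{ij} depends on the species to which the spins i and j belong.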

Relevance: 10.00%

Abstract:

Among the different approaches to the construction of a fundamental quantum theory of gravity, the Asymptotic Safety scenario conjectures that quantum gravity can be defined within the framework of conventional quantum field theory, but only non-perturbatively. In this case its high-energy behavior is controlled by a non-Gaussian fixed point of the renormalization group flow, such that its infinite cutoff limit can be taken in a well-defined way. A theory of this kind is referred to as non-perturbatively renormalizable. In the last decade a considerable amount of evidence has been collected that in four-dimensional metric gravity such a fixed point, suitable for the Asymptotic Safety construction, indeed exists. This thesis extends the Asymptotic Safety program of quantum gravity by three independent studies that differ in the fundamental field variables on which the investigated quantum theory is based, but that all exhibit a gauge group of equivalent semi-direct product structure. This allows, for the first time, a direct comparison of three asymptotically safe theories of gravity constructed from different field variables. The first study investigates metric gravity coupled to SU(N) Yang-Mills theory. In particular, the gravitational effects on the running of the gauge coupling are analyzed and their implications for QED and the Standard Model are discussed. The second analysis amounts to the first investigation of an asymptotically safe theory of gravity in a pure tetrad formulation. Its renormalization group flow is compared to the corresponding approximation of the metric theory, and the influence of its enlarged gauge group on the UV behavior of the theory is analyzed. The third study explores Asymptotic Safety of gravity in the Einstein-Cartan setting. Here, besides the tetrad, the spin connection is considered a second fundamental field. The larger number of independent field components and the enlarged gauge group render any RG analysis of this system much more difficult than the analogous metric analysis. In order to reduce the complexity of this task, a novel functional renormalization group equation is proposed that allows for an evaluation of the flow in a purely algebraic manner. As a first example of its suitability it is applied to a three-dimensional truncation of the form of the Holst action, with the Newton constant, the cosmological constant and the Immirzi parameter as its running couplings. A detailed comparison of the resulting renormalization group flow to a previous study of the same system demonstrates the reliability of the new equation and suggests its use for future studies of extended truncations in this framework.
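
For context, the truncation named above is modelled on the Holst action of Einstein-Cartan gravity, which, up to conventions, has the schematic form (standard expression, not quoted from the thesis)

S_{Holst}[e,\omega] \;=\; \frac{1}{16\pi G}\int d^4x\, |e|\left[ e^\mu_a e^\nu_b\left(F_{\mu\nu}{}^{ab} - \frac{1}{2\gamma}\,\epsilon^{ab}{}_{cd}\,F_{\mu\nu}{}^{cd}\right) - 2\Lambda \right],

with e the tetrad, F_{\mu\nu}{}^{ab} the field strength of the spin connection \omega, \Lambda the cosmological constant, and \gamma the Immirzi parameter.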

Relevance: 10.00%

Abstract:

This work considers the reconstruction of strong gravitational lenses from their observed effects on the light distribution of background sources. After reviewing the formalism of gravitational lensing and the most common and relevant lens models, new analytical results on the elliptical power-law lens are presented, including new expressions for the deflection, potential, shear and magnification, which naturally lead to a fast numerical scheme for practical calculation. The main part of the thesis investigates lens reconstruction with extended sources by means of the forward reconstruction method, in which the lenses and sources are given by parametric models. The numerical realities of the problem make it necessary to find targeted optimisations for the forward method in order to make it feasible for general applications to modern, high-resolution images. The result of these optimisations is presented in the Lensed algorithm. Subsequently, a number of tests for general forward reconstruction methods are created to decouple the influence of the source from that of the lens reconstruction, in order to objectively demonstrate the constraining power of the reconstruction. The final chapters on lens reconstruction contain two sample applications of the forward method. One is the analysis of images from a strong lensing survey. Such surveys today contain on the order of 100 strong lenses, and much larger sample sizes are expected in the future, making it necessary to quickly and reliably analyse catalogues of lenses with a fixed model. The second application deals with the opposite situation of a single observation that is to be confronted with different lens models, where the forward method allows for natural model-building. This is demonstrated using an example reconstruction of the "Cosmic Horseshoe". An appendix presents an independent work on the use of weak gravitational lensing to investigate theories of modified gravity which exhibit screening in the non-linear regime of structure formation.
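
For reference, the forward approach described above is built on the standard lens equation (textbook form, not specific to this thesis),

\beta \;=\; \theta - \alpha(\theta),

which maps an observed image position \theta back to the source position \beta via the scaled deflection angle \alpha; a parametric lens model fixes \alpha, a parametric source model fixes the surface brightness at \beta, and the predicted image is then compared with the observation.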

Relevance: 10.00%

Abstract:

The causa finalis of the present work is an understanding of the phase diagram of hydrogen at ultra-high pressures, which range from insulating H2 to metallic H. Since the conditions of ultra-high pressure are difficult to create in the laboratory, computer simulations are an important alternative tool of investigation. Such calculations are, however, a major challenge. One of the biggest problems is the accurate evaluation of the Born-Oppenheimer potential, which must be suitable for both the insulating and the metallic phase. Moreover, it must account for the strong correlations induced by the covalent H2 bonds and by the possible phase transitions. Our efforts have been aimed at this problem. In the context of Variational Monte Carlo (VMC), the Shadow Wave Function (SWF) is a very promising option. Owing to its flexibility in describing both localized and delocalized systems, as well as its ability to capture high-order correlations, it is an ideal candidate for our purposes. Unfortunately, its formulation entails a sign problem, which limits its applicability. Nevertheless, it is possible to circumvent this difficulty by fixing the nodal structure a priori. Through this formalism we were able to significantly improve the description of the electronic structure of hydrogen, which offers a very promising perspective. In the course of this research we have therefore also investigated the nature of the sign problem affecting the SWF, gaining a deeper understanding of its origin. The present work is divided into four chapters. The first chapter introduces VMC and the SWF, with particular emphasis on fermionic systems. Chapter 2 outlines the literature on the phase diagram of hydrogen at ultra-high pressure. The third chapter presents the implementation of our VMC program and the results obtained. Finally, Chapter 4 summarizes our efforts towards solving the sign problem associated with the SWF.
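
For context, the variational Monte Carlo method mentioned above estimates the ground-state energy from a trial wave function \Psi_T by Metropolis sampling of the local energy (standard VMC expressions, not quoted from the thesis):

E_{VMC} \;=\; \frac{\int dR\, |\Psi_T(R)|^2\, E_L(R)}{\int dR\, |\Psi_T(R)|^2} \;\ge\; E_0, \qquad E_L(R) \;=\; \frac{H\Psi_T(R)}{\Psi_T(R)}.

In the Shadow Wave Function the trial state has the generic form \Psi_{SWF}(R) = \int dS\,\Xi(R,S), i.e. an integral over auxiliary 'shadow' coordinates S coupled to the physical coordinates R by a kernel \Xi, which is what allows localized and delocalized phases to be described on the same footing.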

Relevance: 10.00%

Abstract:

The present work deals with the development and improvement of linear-scaling algorithms for electronic-structure-based molecular dynamics. Molecular dynamics is a method for the computer simulation of the complex interplay between atoms and molecules at finite temperature. A decisive advantage of this method is its high accuracy and predictive power. However, the computational cost, which in general scales cubically with the number of atoms, prevents its application to large systems and long time scales. Starting from a new formalism based on the grand-canonical potential and a factorization of the density matrix, the diagonalization of the corresponding Hamiltonian matrix is avoided. The approach exploits the fact that, owing to localization, the Hamiltonian and the density matrix are sparse. This reduces the computational cost so that it scales linearly with system size. To demonstrate its efficiency, the resulting algorithm is applied to a system of liquid methane exposed to extreme pressure (about 100 GPa) and extreme temperature (2000-8000 K). In the simulation, methane dissociates at temperatures above 4000 K. The formation of sp²-bonded polymeric carbon is observed. The simulations provide no indication of diamond formation and therefore have implications for existing planetary models of Neptune and Uranus. Since avoiding the diagonalization of the Hamiltonian matrix entails the inversion of matrices, the problem of computing an (inverse) p-th root of a given matrix is addressed as well. This results in a new formula for symmetric positive-definite matrices. It generalizes the Newton-Schulz iteration, Altman's formula for bounded non-singular operators, and Newton's method for finding the roots of functions. It is proven that the order of convergence is always at least quadratic and that adaptively tuning a parameter q leads to better results in all cases.
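
As a point of reference for the inverse p-th root discussion above, the classical Newton-Schulz iteration (the special case that the thesis generalizes) can be sketched in a few lines of NumPy; the starting guess below is a common convergence-safe choice and is an illustrative assumption, not the initialization used in the thesis.

import numpy as np

def newton_schulz_inverse(A, tol=1e-10, max_iter=100):
    """Approximate A^{-1} with the classical Newton-Schulz iteration
    X_{k+1} = X_k (2 I - A X_k), which converges quadratically
    whenever ||I - A X_0|| < 1."""
    n = A.shape[0]
    I = np.eye(n)
    # Convergence-safe starting guess: X_0 = A^T / (||A||_1 * ||A||_inf)
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(max_iter):
        X = X @ (2.0 * I - A @ X)
        # Stop once the residual ||I - A X|| is small enough
        if np.linalg.norm(I - A @ X, ord='fro') < tol:
            break
    return X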

Relevance: 10.00%

Abstract:

The primary goal of this work is the extension of an analytic electro-optical model. It is used to describe single-junction crystalline silicon solar cells and a silicon/perovskite tandem solar cell in the presence of light trapping, in order to calculate efficiency limits for such a device. In particular, our tandem system is composed of crystalline silicon and a perovskite-structure material, methylammonium lead triiodide (MALI). Perovskites are among the most attractive materials for photovoltaics thanks to their reduced cost and increasing efficiencies. Solar cell efficiencies of devices using these materials increased from 3.8% in 2009 to a certified 20.1% in 2014, making this the fastest-advancing solar technology to date. Moreover, texturization increases the amount of light which can be absorbed in an active layer. Using Green's formalism it is possible to calculate the photogeneration rate of a single-layer structure with Lambertian light trapping analytically. In this work we go further: we study the optical coupling between the two cells in our tandem system in order to calculate the photogeneration rate of the whole structure. We also model the electronic part of the device by treating the perovskite top cell as an ideal diode and solving the drift-diffusion equations with appropriate boundary conditions for the silicon bottom cell. We have a four-terminal structure, so our tandem system is totally unconstrained. We then calculate the efficiency limits of our tandem including several recombination mechanisms such as Auger, SRH and surface recombination. We also focus on the dependence of the results on the band gap of the perovskite and calculate the optimal band gap that maximizes the tandem efficiency. The whole work has been continuously supported by numerical validation of our analytic model against Silvaco ATLAS, which solves the drift-diffusion equations using a finite-element method. Our goal is to develop a simpler and computationally cheaper, yet accurate, model to study such devices.
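
For reference, the drift-diffusion model used for the silicon bottom cell consists, in its standard textbook form (not quoted from the thesis), of the carrier continuity equations coupled to the drift-diffusion current relations, e.g. for electrons in steady state

\frac{1}{q}\,\nabla\cdot J_n \;=\; R - G, \qquad J_n \;=\; q\,\mu_n\, n\, E \;+\; q\, D_n\,\nabla n,

where G is the photogeneration rate obtained from the optical model, R collects the recombination mechanisms (Auger, SRH, surface), \mu_n and D_n are the electron mobility and diffusion coefficient, and an analogous pair of equations holds for holes.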

Relevance: 10.00%

Abstract:

The group analysed some syntactic and phonological phenomena that presuppose the existence of interrelated components within the lexicon, which motivate the assumption that there are some sublexicons within the global lexicon of a speaker. This result is confirmed by experimental findings in neurolinguistics. Hungarian-speaking agrammatic aphasics were tested in several ways, the results showing that the sublexicon of closed-class lexical items provides a highly automated complex device for processing surface sentence structure. Analysing Hungarian ellipsis data from a semantic-syntactic aspect, the group established that the lexicon is best conceived of as being split into at least two main sublexicons: the store of semantic-syntactic feature bundles and a separate store of sound forms. On this basis they proposed a format for representing open-class lexical items whose meanings are connected via certain semantic relations. They also proposed a new classification of verbs to account for their contribution to the aspectual reading of the sentence, depending on the referential type of the argument, and a new account of the syntactic and semantic behaviour of aspectual prefixes. The partitioned sets of lexical items are sublexicons on phonological grounds. These sublexicons differ in terms of phonotactic grammaticality. The degrees of phonotactic grammaticality are tied up with the problem of psychological reality, namely of how many such degrees native speakers are sensitive to. The group developed a hierarchical construction network as an extension of the original General Inheritance Network formalism, and this framework was then used as a platform for the implementation of the grammar fragments.

Relevance: 10.00%

Abstract:

Mr. Kubon's project was inspired by the growing need for an automatic syntactic analyser (parser) of Czech, which could be used in the syntactic processing of large amounts of text. Mr. Kubon notes that such a tool would be very useful, especially in the field of corpus linguistics, where creating a large-scale "tree bank" (a collection of syntactic representations of natural language sentences) is a very important step towards the investigation of the properties of a given language. The work involved in syntactically parsing a whole corpus in order to get a representative set of syntactic structures would be almost inconceivable without the help of some kind of robust (semi)automatic parser. The need for the automatic natural language parser to be robust increases with the size of the linguistic data in the corpus or in any other kind of text which is going to be parsed. Practical experience shows that apart from syntactically correct sentences, there are many sentences which contain a "real" grammatical error. These sentences may be corrected in small-scale texts, but not generally in the whole corpus. In order to be able to complete the overall project, it was necessary to address a number of smaller problems. These were: 1. the adaptation of a suitable formalism able to describe the formal grammar of the system; 2. the definition of the structure of the system's dictionary containing all relevant lexico-syntactic information; 3. the development of a formal grammar able to robustly parse Czech sentences from the test suite; 4. filling the syntactic dictionary with sample data allowing the system to be tested and debugged during its development (about 1000 words); 5. the development of a set of sample sentences containing a reasonable amount of grammatical and ungrammatical phenomena covering some of the most typical syntactic constructions used in Czech. Number 3, building a formal grammar, was the main task of the project. The grammar is of course far from complete (Mr. Kubon notes that it is debatable whether any formal grammar describing a natural language may ever be complete), but it covers the most frequent syntactic phenomena, allowing for the representation of the syntactic structure of simple clauses and also the structure of certain types of complex sentences. The stress was not so much on building a wide-coverage grammar as on the description and demonstration of a method. This method uses a similar approach to that of grammar-based grammar checking. The problem of reconstructing the "correct" form of the syntactic representation of a sentence is closely related to the problem of localisation and identification of syntactic errors: without precise knowledge of the nature and location of syntactic errors it is not possible to build a reliable estimation of a "correct" syntactic tree. The incremental way of building the grammar used in this project is also an important methodological issue. Experience from previous projects showed that building a grammar by creating a huge block of metarules is more complicated than the incremental method, which begins with the metarules covering the most common syntactic phenomena first and adds less important ones later, especially from the point of view of testing and debugging the grammar. The sample of the syntactic dictionary containing lexico-syntactic information (task 4) now has slightly more than 1000 lexical items representing all classes of words.
During the creation of the dictionary it turned out that the task of assigning complete and correct lexico-syntactic information to verbs is a very complicated and time-consuming process which would itself be worth a separate project. The final task undertaken in this project was the development of a method allowing effective testing and debugging of the grammar during the process of its development. The consistency of new and modified rules of the formal grammar with the rules already existing is one of the crucial problems of every project aiming at the development of a large-scale formal grammar of a natural language. This method allows for the detection of any discrepancy or inconsistency of the grammar with respect to a test-bed of sentences containing all syntactic phenomena covered by the grammar (see the sketch below). This is not only the first robust parser of Czech, but also one of the first robust parsers of a Slavic language. Since Slavic languages display a wide range of common features, it is reasonable to claim that this system may serve as a pattern for similar systems in other languages. To transfer the system to any other language it is only necessary to revise the grammar and to change the data contained in the dictionary (but not necessarily the structure of primary lexico-syntactic information). The formalism and methods used in this project can be used for other Slavic languages without substantial changes.
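
A minimal sketch, in Python, of this kind of test-bed regression check; the function parse() and the layout of the test-bed are hypothetical placeholders chosen for illustration, not the interfaces used in the project.

def run_test_bed(parse, test_bed):
    """Re-parse every sentence in the test-bed and report discrepancies.

    parse(sentence) is assumed to return True if the grammar accepts the
    sentence; test_bed is a list of (sentence, expected_grammaticality) pairs.
    """
    failures = []
    for sentence, expected in test_bed:
        if parse(sentence) != expected:
            failures.append((sentence, expected))
    return failures

# Example test-bed with placeholder sentences:
# test_bed = [("<grammatical sentence>", True), ("<ungrammatical sentence>", False)]
# After each change to the grammar, an empty result from run_test_bed confirms
# that the new or modified rules are still consistent with all phenomena covered so far.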

Relevance: 10.00%

Abstract:

Custom modes at a wavelength of 1064 nm were generated with a deformable mirror. The required surface deformations of the adaptive mirror were calculated with the Collins integral written in a matrix formalism. The appropriate size and shape of the actuators, as well as the needed stroke, were determined to ensure that the surface of the controllable mirror matches the phase front of the custom modes. A semipassive bimorph adaptive mirror with five concentric ring-shaped actuators and one defocus actuator was manufactured and characterised. The surface deformation was modelled with the response functions of the adaptive mirror in terms of an expansion in Zernike polynomials. In the experiments the Nd:YAG laser crystal was quasi-CW pumped to avoid thermally induced distortions of the phase front. The adaptive mirror makes it possible to switch between a super-Gaussian mode, a doughnut mode, a Hermite-Gaussian fundamental beam, multi-mode operation, or no oscillation in real time during laser operation.
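
For reference, modelling a mirror surface by a Zernike expansion, as mentioned above, amounts to writing (standard convention, not quoted from this work)

S(\rho,\varphi) \;=\; \sum_j c_j\, Z_j(\rho,\varphi),

where the Z_j are Zernike polynomials orthogonal on the unit disk, \rho and \varphi are normalized polar pupil coordinates, and the coefficients c_j are obtained by projecting the measured or desired deformation onto the response functions of the individual actuators; the defocus actuator, for instance, predominantly drives the Z_2^0 \propto 2\rho^2 - 1 term.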

Relevance: 10.00%

Abstract:

The craze for faster and smaller electronic devices has never died down, and this has always kept researchers on their toes. Following Moore's law, which states that the number of transistors in a single chip will double every 18 months, today "30 million transistors can fit into the head of a 1.5 mm diameter pin". But this miniaturization cannot continue indefinitely due to the 'quantum leakage' limit on the thickness of the insulating layer between the gate electrode and the current-carrying channel. To bypass this limitation, scientists came up with the idea of using widely available organic molecules as components in an electronic device. One of the primary challenges in this field was the ability to perform conductance measurements across single molecular junctions. Once that was achieved, the focus shifted to a deeper understanding of the underlying physics behind electron transport across these molecular-scale devices. Our initial theoretical approach is based on the conventional Non-Equilibrium Green Function (NEGF) formulation, but the self-energy of the leads is modified to include a weighting factor that ensures negligible current in the absence of a molecular pathway, as observed in a Mechanically Controlled Break Junction (MCBJ) experiment. The formulation is then made parameter-free by a more careful estimation of the self-energy of the leads. The calculated conductance turns out to be at least an order of magnitude larger than the experimental values, which is probably due to a strong chemical bond at the metal-molecule junction, unlike in the experiments. The focus then shifts to a comparative study of charge transport in molecular wires of different lengths within the same formalism. The molecular wires, composed of a series of organic molecules, are sandwiched between two gold electrodes to make a two-terminal device. The length of the wire is increased by sequentially increasing the number of molecules in the wire from 1 to 3. In the low-bias regime all the molecular devices are found to exhibit Ohmic behavior. However, the magnitude of the conductance decreases exponentially with increasing length of the wire. In the next study, the relative contribution of the 'in-phase' and the 'out-of-phase' components of the total electronic current under the influence of an external bias is estimated for the wires of three different lengths. In the low-bias regime, the 'out-of-phase' contribution to the total current is minimal and the 'in-phase' elastic tunneling of the electrons is responsible for the net electronic current. This is true irrespective of the length of the molecular spacer. In this regime, the current-voltage characteristics follow Ohm's law and the conductance of the wires is found to decrease exponentially with increasing length, which is in agreement with experimental results. However, after a certain 'offset' voltage, the current increases non-linearly with bias and the 'out-of-phase' tunneling of electrons reduces the net current substantially. Subsequently, the interaction of conduction electrons with the vibrational modes as a function of external bias in the three different oligomers is studied, since such interactions are one of the main sources of phase-breaking scattering. The number of vibrational modes that couple strongly with the frontier molecular orbitals is found to increase with the length of the spacer and with the external field. This is consistent with the existence of the lowest 'offset' voltage for the longest wire under study.
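
For context, in the NEGF approach mentioned above the coherent ('in-phase') current through the junction is conventionally obtained from the Landauer expression (standard textbook result, not specific to this work),

I \;=\; \frac{2e}{h}\int dE\; T(E)\,\big[f_L(E) - f_R(E)\big], \qquad T(E) \;=\; \mathrm{Tr}\big[\Gamma_L\, G^r\, \Gamma_R\, G^a\big],

where G^{r,a} are the retarded and advanced Green functions of the molecule, \Gamma_{L,R} are the broadening matrices derived from the lead self-energies, and f_{L,R} are the Fermi functions of the electrodes; in the low-bias Ohmic regime this reduces to a conductance G = (2e^2/h)\,T(E_F).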