941 results for Classical super-integrable field theory
Abstract:
A horizontal fluid layer heated from below in the presence of a vertical magnetic field is considered. A simple asymptotic analysis is presented which demonstrates that a convection mode attached to the side walls of the layer sets in at Rayleigh numbers well below those required for the onset of convection in the bulk of the layer. The analysis complements an earlier analysis by Houchens [J. Fluid Mech. 469, 189 (2002)] which derived expressions for the critical Rayleigh number for the onset of convection in a vertical cylinder with an axial magnetic field for two aspect ratios. © 2008 American Institute of Physics.
Deformation Lemma, Ljusternik-Schnirelmann Theory and Mountain Pass Theorem on C1-Finsler Manifolds
Abstract:
∗Partially supported by Grant MM409/94 of the Ministry of Science and Education, Bulgaria. ∗∗Partially supported by Grant MM442/94 of the Ministry of Science and Education, Bulgaria.
Abstract:
2000 Mathematics Subject Classification: 13N15, 13A50, 16W25.
Abstract:
2000 Mathematics Subject Classification: 13N15, 13A50, 13F20.
Abstract:
We present a review of the latest developments in one-dimensional (1D) optical wave turbulence (OWT). Based on an original experimental setup that allows for the implementation of 1D OWT, we are able to show that an inverse cascade occurs through the spontaneous evolution of the nonlinear field up to the point when modulational instability leads to soliton formation. After solitons are formed, further interaction of the solitons among themselves and with incoherent waves leads to a final condensate state dominated by a single strong soliton. Motivated by the observations, we develop a theoretical description, showing that the inverse cascade develops through six-wave interaction, and that this is the basic mechanism of nonlinear wave coupling for 1D OWT. We describe theory, numerics and experimental observations while trying to incorporate all the different aspects into a consistent context. The experimental system is described by two coupled nonlinear equations, which we explore within two wave limits, allowing the evolution of the complex amplitude to be expressed in a single dynamical equation. The long-wave limit corresponds to waves with wave numbers smaller than the electrical coherence length of the liquid crystal, and the short-wave limit to wave numbers larger than it. We show that both of these systems are of a dual cascade type, analogous to two-dimensional (2D) turbulence, which can be described by wave turbulence (WT) theory, and conclude that the cascades are induced by a six-wave resonant interaction process. WT theory predicts several stationary solutions (non-equilibrium and thermodynamic) to both the long- and short-wave systems, and we investigate the necessary conditions for their realization. 
Interestingly, the long-wave system is close to the integrable 1D nonlinear Schrödinger equation (NLSE) (which contains exact nonlinear soliton solutions), and as a result during the inverse cascade, nonlinearity of the system at low wave numbers becomes strong. Subsequently, due to the focusing nature of the nonlinearity, this leads to modulational instability (MI) of the condensate and the formation of solitons. Finally, with the aid of the probability density function (PDF) description of WT theory, we explain the coexistence and mutual interactions between solitons and the weakly nonlinear random wave background in the form of a wave turbulence life cycle (WTLC).
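For reference, the focusing 1D NLSE invoked above can be written, in a standard dimensionless normalisation (the normalisation is an assumption here, not the variables of the reviewed work), together with the bright-soliton solution whose existence underlies the modulational instability of the condensate:

```latex
i\,\partial_t \psi + \partial_x^2 \psi + 2\lvert\psi\rvert^2 \psi = 0,
\qquad
\psi_{\mathrm{sol}}(x,t) = A\,\operatorname{sech}(Ax)\, e^{iA^2 t}.
```

The amplitude A sets both the soliton width 1/A and the phase rotation rate A², so once the inverse cascade makes the nonlinearity strong at low wave numbers, a broad condensate can lower its energy by breaking up into such localised structures.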
Abstract:
Purpose – The paper aims to explore the gap between theory and practice in foresight and to give some suggestions on how to reduce it. Design/methodology/approach – Analysis of practical foresight activities and suggestions are based on a literature review, the author's own research and practice in the field of foresight and futures studies, and her participation in the work of a European project (COST A22). Findings – Two different types of practical foresight activities have developed. One of them, the practice of foresight of critical futures studies (FCFS), is an application of a theory of futures studies. The other, termed here praxis foresight (PF), has no theoretical basis and responds directly to practical needs. At present a gap can be perceived between theory and practice. PF distinguishes itself from the practice and theory of FCFS and narrows the construction space of futures. Neither FCFS nor PF deals with content issues of the outer world. Reducing the gap depends on renewal of joint discourses and research about experience of different practical foresight activities and manageability of complex dynamics in foresight. Production and feedback of self-reflective and reflective foresight knowledge could improve theory and practice. Originality/value – Contemporary practical foresight activities are analysed and suggestions to reduce the gap are developed in the context of the linkage between theory and practice. This paper is thought-provoking for futurists, foresight managers and university researchers.
Abstract:
Chains of interacting non-Abelian anyons with local interactions invariant under the action of the Drinfeld double of the dihedral group D_3 are constructed. Formulated as a spin chain, the Hamiltonians are generated from commuting transfer matrices of an integrable vertex model for periodic and braided as well as open boundaries. A different anyonic model with the same local Hamiltonian is obtained within the fusion path formulation. This model is shown to be related to an integrable fusion interaction-round-the-face model. Bulk and surface properties of the anyon chain are computed from the Bethe equations for the spin chain. The low-energy effective theories and operator content of the models (in both the spin chain and fusion path formulations) are identified from analytical and numerical studies of the finite-size spectra. For all boundary conditions considered, the continuum theory is found to be a product of two conformal field theories. Depending on the coupling constants, the factors can be a Z_4 parafermion or an M(5,6) minimal model.
Abstract:
This thesis describes a collection of studies into the electrical response of a III-V MOS stack comprising metal/GaGdO/GaAs layers as a function of fabrication process variables, and the findings of those studies. As a result of this work, areas of improvement in the gate process module of a III-V heterostructure MOSFET were identified. Compared to traditional bulk silicon MOSFET design, one featuring a III-V channel heterostructure with a high-dielectric-constant oxide as the gate insulator provides numerous benefits, for example: the insulator can be made thicker for the same capacitance, the operating voltage can be made lower for the same current output, and improved output characteristics can be achieved without reducing the channel length further. It is known that transistors composed of III-V materials are particularly susceptible to damage induced by radiation and plasma processing. These devices utilise sub-10 nm gate dielectric films, which are prone to contamination, degradation and damage. Therefore, throughout the course of this work, process damage and contamination issues, as well as various techniques to mitigate or prevent those, have been investigated through comparative studies of III-V MOS capacitors and transistors comprising various forms of metal gates, various thicknesses of GaGdO dielectric, and a number of GaAs-based semiconductor layer structures. Transistors fabricated before this work commenced showed problems with threshold voltage control. Specifically, MOSFETs designed for normally-off (VTH > 0) operation exhibited below-zero threshold voltages. With the results obtained during this work, it was possible to gain an understanding of why the transistor threshold voltage shifts as the gate length decreases and of what pulls the threshold voltage downwards, preventing normally-off device operation. Two main culprits for the negative VTH shift were found. 
The first was radiation damage induced by the gate metal deposition process, which can be prevented by slowing down the deposition rate. The second was the layer of gold added on top of platinum in the gate metal stack, which reduces the effective work function of the whole gate due to its electronegativity properties. Since the device was designed for a platinum-only gate, this could explain the below-zero VTH. This could be prevented either by using a platinum-only gate, or by matching the layer structure design to the actual gate metal used in future devices. Post-metallisation thermal anneal was shown to mitigate both these effects. However, if post-metallisation annealing is used, care should be taken to ensure it is performed before the ohmic contacts are formed, as the thermal treatment was shown to degrade the source/drain contacts. In addition, the programme of studies this thesis describes found that if the gate contact is deposited before the source/drain contacts, it causes a shift in threshold voltage towards negative values as the gate length decreases, because the ohmic contact anneal process affects the properties of the underlying material differently depending on whether it is covered with the gate metal or not. In terms of surface contamination, this work found that it causes device-to-device parameter variation, and a plasma clean is therefore essential. This work also demonstrated that a parasitic capacitance in the system, namely the contact-periphery-dependent gate-ohmic capacitance, plays a significant role in the total gate capacitance. This is true to such an extent that reducing the distance between the gate and the source/drain ohmic contacts in the device would help shift the threshold voltages closer to the designed values. The findings made available by the collection of experiments performed for this work have two major applications. 
Firstly, these findings provide useful data in the study of the possible phenomena taking place inside the metal/GaGdO/GaAs layers and interfaces as a result of the chemical processes applied to them. In addition, these findings allow recommendations on how best to approach fabrication of devices utilising these layers.
Abstract:
Metamaterials are 1D, 2D or 3D arrays of artificial atoms. The artificial atoms, called "meta-atoms", can be any component with tailorable electromagnetic properties, such as resonators, LC circuits, nanoparticles, and so on. By designing the properties of individual meta-atoms and the interaction created by putting them in a lattice, one can create a metamaterial with intriguing properties not found in nature. My Ph.D. work examines meta-atoms based on radio frequency superconducting quantum interference devices (rf-SQUIDs); their tunability with dc magnetic field, rf magnetic field, and temperature is studied. The rf-SQUIDs are superconducting split-ring resonators in which the usual capacitance is supplemented with a Josephson junction, which introduces strong nonlinearity in the rf properties. At relatively low rf magnetic field, a magnetic field tunability of the resonant frequency of up to 80 THz/Gauss by dc magnetic field is observed, and a total frequency tunability of 100% is achieved. The macroscopic quantum superconducting metamaterial also shows manipulative self-induced broadband transparency due to a qualitatively novel nonlinear mechanism that is different from conventional electromagnetically induced transparency (EIT) or its classical analogs. A near-complete disappearance of resonant absorption under a range of applied rf flux is observed experimentally and explained theoretically. The transparency comes from the intrinsic bi-stability and can be tuned on/off easily by altering rf and dc magnetic fields, temperature and history. Hysteretic in situ 100% tunability of transparency paves the way for auto-cloaking metamaterials, intensity-dependent filters, and fast-tunable power limiters. An rf-SQUID metamaterial is shown to have qualitatively the same behavior as a single rf-SQUID with regard to dc flux, rf flux and temperature tuning. 
The two-tone response of self-resonant rf-SQUID meta-atoms and metamaterials is then studied via intermodulation (IM) measurements over a broad range of tone frequencies and tone powers. A sharp onset followed by a surprisingly strongly suppressed IM region near the resonance is observed. This behavior can be understood using methods from nonlinear dynamics: the sharp onset and the gap in IM are due to sudden state jumps during a beat of the two-tone sum input signal. The theory predicts that the IM can be manipulated with tone power, center frequency, frequency difference between the two tones, and temperature. This quantitative understanding potentially allows for the design of rf-SQUID metamaterials with either very low or very high IM response.
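For intuition about where intermodulation products come from, the following minimal numerical sketch drives a memoryless cubic nonlinearity with two tones and inspects the output spectrum. This is a generic illustration, not the rf-SQUID model studied above; the tone frequencies, sample rate and nonlinearity strength are arbitrary assumptions.

```python
import numpy as np

fs, n = 1000, 1000                      # 1 s of samples -> 1 Hz FFT bins
t = np.arange(n) / fs
f1, f2 = 10.0, 12.0                     # the two drive tones (Hz)
x = np.cos(2*np.pi*f1*t) + np.cos(2*np.pi*f2*t)
y = x + 0.5 * x**3                      # weakly nonlinear element (assumed form)
spec = np.abs(np.fft.rfft(y)) / n       # one-sided amplitude spectrum (halved)

# third-order intermodulation products fall at 2*f1 - f2 and 2*f2 - f1,
# i.e. close to the drive tones, which is what makes them hard to filter out
im_low, im_high = int(2*f1 - f2), int(2*f2 - f1)   # 8 Hz and 14 Hz bins
```

Expanding `(cos a + cos b)**3` shows the cross terms `3 cos^2(a) cos(b)` and `3 cos(a) cos^2(b)` are precisely what generates the sidebands at `2*f1 - f2` and `2*f2 - f1`; a two-tone IM measurement isolates these bins as a sensitive probe of the nonlinearity.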
Abstract:
The equivalence of the noncommutative U(N) quantum field theories related by the θ-exact Seiberg-Witten maps is proven in this paper to all orders in perturbation theory with respect to the coupling constant. We show that this holds for super Yang-Mills theories with N=0, 1, 2, 4 supersymmetry. A direct check of this equivalence relation is performed by computing the one-loop quantum corrections to the quadratic part of the effective action in the noncommutative U(1) gauge theory with N=0, 1, 2, 4 supersymmetry.
Abstract:
In this third Quantum Interaction (QI) meeting it is time to examine our failures. One of the weakest elements of QI as a field arises in its continuing lack of models displaying proper evolutionary dynamics. This paper presents an overview of the modern generalised approach to the derivation of time evolution equations in physics, showing how the notion of symmetry is essential to the extraction of operators in quantum theory. The form that symmetry might take in non-physical models is explored, with a number of viable avenues identified.
Abstract:
The purpose of this study is to investigate how secondary school media educators might best meet the needs of students who prefer practical production work to ‘theory’ work in media studies classrooms. This is a significant problem for a curriculum area that claims to develop students’ media literacies by providing them with critical frameworks and a metalanguage for thinking about the media. It is a problem that seems to have become more urgent with the availability of new media technologies and forms like video games. The study is located in the field of media education, which tends to draw on structuralist understandings of the relationships between young people and media and suggests that students can be empowered to resist media’s persuasive discourses. Recent theoretical developments suggest too little emphasis has been placed on the participatory aspects of young people playing with, creating and gaining pleasure from media. This study contributes to this ‘participatory’ approach by bringing post-structuralist perspectives to the field, which have been absent from studies of secondary school media education. I suggest theories of media learning must take account of the ongoing formation of students’ subjectivities as they negotiate social, cultural and educational norms. Michel Foucault’s theory of ‘technologies of the self’ and Judith Butler’s theories of performativity and recognition are used to develop an argument that media learning occurs in the context of students negotiating various ‘ethical systems’ as they establish their social viability through achieving recognition within communities of practice. The concept of ‘ethical systems’ has been developed for this study by drawing on Foucault’s theories of discourse and ‘truth regimes’ and Butler’s updating of Althusser’s theory of interpellation. 
This post-structuralist approach makes it possible to investigate the ways in which students productively repeat and vary norms to creatively ‘do’ and ‘undo’ the various media learning activities with which they are required to engage. The study focuses on a group of year ten students in an all-boys Catholic urban school in Australia who undertook learning about video games in a three-week intensive ‘immersion’ program. The analysis examines the ethical systems operating in the classroom, including formal systems of schooling, informal systems of popular cultural practice and systems of masculinity. It also examines the students’ use of semiotic resources to repeat and/or vary norms while reflecting on, discussing, designing and producing video games. The key findings of the study are that students are motivated to learn technology skills and production processes rather than ‘theory’ work. This motivation stems from the students’ desire to become recognisable in communities of technological and masculine practice. However, student agency is not only possible through critical responses to media, but also through performative variation of norms through creative ethical practices as students participate with new media technologies. Therefore, the opportunities exist for media educators to create the conditions for variation of norms through production activities. The study offers several implications for media education theory and practice including: the productive possibilities of post-structuralism for informing ways of doing media education; the importance of media teachers having the autonomy to creatively plan curriculum; the advantages of media and technology teachers collaborating to draw on a broad range of resources to develop curriculum; the benefits of placing more emphasis on students’ creative uses of media; and the advantages of blending formal classroom approaches to media education with less formal out-of-school experiences.
Abstract:
The experimental literature and studies using survey data have established that people care a great deal about their relative economic position and not solely, as standard economic theory assumes, about their absolute economic position. Individuals are concerned about social comparisons. However, behavioral evidence from the field is rare. This paper provides an empirical analysis testing the model of inequality aversion using two unique panel data sets for basketball and soccer players. We find support for the hypothesis that inequality aversion helps to explain how the relative income situation affects performance in a real competitive environment with real tasks and real incentives.
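The inequality-aversion model being tested is commonly formalised along the lines of Fehr and Schmidt's utility function; the exact specification estimated in the paper is not reproduced here, so the canonical form below is an assumption:

```latex
U_i(x) = x_i
 - \frac{\alpha_i}{n-1}\sum_{j\neq i}\max(x_j - x_i,\, 0)
 - \frac{\beta_i}{n-1}\sum_{j\neq i}\max(x_i - x_j,\, 0).
```

Here \(\alpha_i\) weights disadvantageous inequality (others earn more than player \(i\)) and \(\beta_i\) advantageous inequality, with \(\alpha_i \ge \beta_i\) typically imposed; a player's performance is then predicted to respond to his position in the team's pay distribution, not only to his own salary level.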
Abstract:
The most costly operations encountered in pairing computations are those that take place in the full extension field Fpk. At high levels of security, the complexity of operations in Fpk dominates the complexity of the operations that occur in the lower degree subfields. Consequently, full extension field operations have the greatest effect on the runtime of Miller’s algorithm. Many recent optimizations in the literature have focussed on improving the overall operation count by presenting new explicit formulas that reduce the number of subfield operations encountered throughout an iteration of Miller’s algorithm. Unfortunately, almost all of these improvements tend to suffer for larger embedding degrees, where the expensive extension field operations far outweigh the operations in the smaller subfields. In this paper, we propose a new way of carrying out Miller’s algorithm that involves new explicit formulas which reduce the number of full extension field operations occurring in an iteration of the Miller loop, resulting in significant speed-ups of between 5 and 30 percent in most practical situations.
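As a concrete reference point for the structure being optimised, here is a minimal sketch of Miller's algorithm computing a reduced Tate pairing on a toy supersingular curve. Everything concrete here (the prime p = 59, the curve y² = x³ + x, the subgroup order r = 5, embedding degree k = 2, the distortion map, and the denominator-elimination shortcut valid for even k) is an illustrative assumption, not a construction from the paper.

```python
# Toy reduced Tate pairing via Miller's algorithm (illustrative parameters).
p, r = 59, 5                        # p = 3 (mod 4); r divides #E(F_p) = p + 1

# F_{p^2} = F_p(i) with i^2 = -1; elements are pairs (a, b) meaning a + b*i.
def f2_add(x, y): return ((x[0] + y[0]) % p, (x[1] + y[1]) % p)
def f2_sub(x, y): return ((x[0] - y[0]) % p, (x[1] - y[1]) % p)
def f2_mul(x, y):
    a, b, c, d = x[0], x[1], y[0], y[1]
    return ((a*c - b*d) % p, (a*d + b*c) % p)
def f2_inv(x):
    t = pow(x[0]*x[0] + x[1]*x[1], p - 2, p)
    return (x[0]*t % p, (-x[1]*t) % p)
def f2_pow(x, e):
    out = (1, 0)
    while e:
        if e & 1: out = f2_mul(out, x)
        x = f2_mul(x, x); e >>= 1
    return out

def slope(A, B):                    # chord (or tangent) slope on y^2 = x^3 + x
    if A == B:                      # tangent: (3x^2 + 1) / (2y)
        num = f2_add(f2_mul((3, 0), f2_mul(A[0], A[0])), (1, 0))
        return f2_mul(num, f2_inv(f2_mul((2, 0), A[1])))
    return f2_mul(f2_sub(B[1], A[1]), f2_inv(f2_sub(B[0], A[0])))

def ec_add(A, B):                   # affine addition; None = point at infinity
    if A is None: return B
    if B is None: return A
    if A[0] == B[0] and f2_add(A[1], B[1]) == (0, 0): return None
    lam = slope(A, B)
    x3 = f2_sub(f2_sub(f2_mul(lam, lam), A[0]), B[0])
    return (x3, f2_sub(f2_mul(lam, f2_sub(A[0], x3)), A[1]))

def ec_mul(A, n):                   # double-and-add scalar multiplication
    R = None
    while n:
        if n & 1: R = ec_add(R, A)
        A = ec_add(A, A); n >>= 1
    return R

def line_eval(A, B, Q):             # line through A and B, evaluated at Q
    if A != B and A[0] == B[0]:     # vertical line x - x_A
        return f2_sub(Q[0], A[0])
    lam = slope(A, B)
    return f2_sub(f2_sub(Q[1], A[1]), f2_mul(lam, f2_sub(Q[0], A[0])))

def miller(P, Q, n):                # Miller loop, denominators eliminated
    f, T = (1, 0), P
    for bit in bin(n)[3:]:
        f = f2_mul(f2_mul(f, f), line_eval(T, T, Q))   # f <- f^2 * l_{T,T}(Q)
        T = ec_add(T, T)
        if bit == '1':
            f = f2_mul(f, line_eval(T, P, Q))          # f <- f * l_{T,P}(Q)
            T = ec_add(T, P)
    return f

def tate(P, Q):                     # reduced Tate pairing: final exponentiation
    return f2_pow(miller(P, Q, r), (p*p - 1) // r)

# find a point of order r in E(F_p) by cofactor multiplication
P0 = None
for x in range(1, p):
    rhs = (x**3 + x) % p
    y = pow(rhs, (p + 1) // 4, p)   # candidate square root (p = 3 mod 4)
    if y*y % p == rhs:
        P0 = ec_mul(((x, 0), (y, 0)), (p + 1) // r)
        if P0 is not None: break

# distortion map (x, y) -> (-x, i*y) yields an independent second argument
Q = (((-P0[0][0]) % p, 0), (0, P0[1][0]))

e1 = tate(P0, Q)                    # should be a nontrivial r-th root of unity
e2 = tate(ec_mul(P0, 2), Q)         # bilinearity: e(2P, Q) = e(P, Q)^2
```

In a real implementation r is a large prime and the dominant cost is exactly the full-extension-field products `f*f` and `f*l` inside `miller`, which is why reducing the number of Fpk operations per iteration, as the paper proposes, pays off directly.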
Abstract:
Miller’s algorithm for computing pairings involves performing multiplications between elements that belong to different finite fields. Namely, elements in the full extension field Fpk are multiplied by elements contained in proper subfields Fpk/d, and by elements in the base field Fp. We show that significant speedups in pairing computations can be achieved by delaying these “mismatched” multiplications for an optimal number of iterations. Importantly, we show that our technique can be easily integrated into traditional pairing algorithms; implementers can exploit the computational savings herein by applying only minor changes to existing pairing code.
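The idea of delaying mismatched multiplications can be shown with a deliberately simplified toy: rather than folding each base-field value into the extension-field accumulator as it appears, first combine the base-field values among themselves and perform a single mismatched multiplication at the end. This sketch ignores the squarings interleaved into a real Miller loop, and the field F_{p^2}, the prime, and the operation counter below are assumptions for illustration only.

```python
# Toy cost model: count F_p multiplications for naive vs delayed folding
# of base-field scalars into an F_{p^2} element (pairs (a, b) = a + b*i).
p = 101
COUNT = 0

def base_mul(a, b):                    # one base-field multiplication, counted
    global COUNT
    COUNT += 1
    return a * b % p

def scalar_times_f2(s, x):             # "mismatched" mult: F_p times F_{p^2}
    return (base_mul(s, x[0]), base_mul(s, x[1]))

f = (3, 7)                             # an F_{p^2} accumulator
scalars = [5, 9, 11, 20, 33, 41, 56, 73]

COUNT = 0                              # naive: one mismatched mult per scalar
naive = f
for s in scalars:
    naive = scalar_times_f2(s, naive)
naive_cost = COUNT

COUNT = 0                              # delayed: combine scalars in F_p first,
acc = 1                                # then a single mismatched mult
for s in scalars:
    acc = base_mul(acc, s)
delayed = scalar_times_f2(acc, f)
delayed_cost = COUNT
```

Both orders give the same field element, but the delayed variant trades cheap base-field multiplications for expensive mismatched ones; in the real setting the gap grows with the embedding degree, since a mismatched F_{p^k} multiplication costs many base-field operations rather than the two counted here.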