929 results for Heisenberg uncertainty principle
Abstract:
Using the generative processes developed over two stages of creative development and the performance of The Physics Project at the Loft at the Creative Industries Precinct at the Queensland University of Technology (QUT) from 5th – 8th April 2006 as a case study, this exegesis considers how the principles of contemporary physics can be reframed as aesthetic principles in the creation of contemporary performance. The Physics Project is an original performance work that melds live performance, video and web-casting and overlaps an exploration of personal identity with the physics of space, time, light and complementarity. It considers the acts of translation between the language of physics and the language of contemporary performance that occur via process and form. This exegesis also examines the devices in contemporary performance making and contemporary performance that extend the reach of the performance, including the integration of the live and the mediated and the use of metanarratives.
Abstract:
We define lacunary Fourier series on a compact connected semisimple Lie group G. If f ∈ L^1(G) has a lacunary Fourier series and f vanishes on a non-empty open subset of G, then we prove that f vanishes identically. This result can be viewed as a qualitative uncertainty principle.
Assembly of an experimental setup for verifying the Heisenberg uncertainty principle
Abstract:
In this paper we present an experimental setup to check the Heisenberg uncertainty principle. The description of the experimental setup and of the theoretical foundations is aimed at familiarizing students with the concepts involved.
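The abstract does not specify the apparatus, but the bound being verified can be illustrated numerically: a Gaussian wave packet saturates Δx·Δp = ħ/2, the Heisenberg minimum. The sketch below (in natural units, ħ = 1; grid sizes are arbitrary choices) computes both spreads directly from the sampled wave function.

```python
import numpy as np

hbar = 1.0                         # natural units
N, L = 4096, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

sigma = 1.0
psi = np.exp(-x**2 / (4 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize

def spread(values, prob, step):
    """Standard deviation of a sampled probability density."""
    mean = np.sum(values * prob) * step
    return np.sqrt(np.sum((values - mean)**2 * prob) * step)

dx_spread = spread(x, np.abs(psi)**2, dx)

# momentum-space distribution via FFT
p = 2 * np.pi * hbar * np.fft.fftfreq(N, d=dx)
dp = p[1] - p[0]
prob_p = np.abs(np.fft.fft(psi))**2
prob_p /= np.sum(prob_p) * dp
dp_spread = spread(p, prob_p, dp)

print(dx_spread * dp_spread / hbar)           # ≈ 0.5, the Heisenberg minimum
```

For a non-Gaussian packet the same computation yields a product strictly above ħ/2.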
Abstract:
We address the problem of high-resolution reconstruction in frequency-domain optical-coherence tomography (FDOCT). The traditional method employed uses the inverse discrete Fourier transform, which is limited in resolution due to the Heisenberg uncertainty principle. We propose a reconstruction technique based on zero-crossing (ZC) interval analysis. The motivation for our approach lies in the observation that, for a multilayered specimen, the backscattered signal may be expressed as a sum of sinusoids, and each sinusoid manifests as a peak in the FDOCT reconstruction. The successive ZC intervals of a sinusoid exhibit high consistency, with the intervals being inversely related to the frequency of the sinusoid. The statistics of the ZC intervals are used for detecting the frequencies present in the input signal. The noise robustness of the proposed technique is improved by using a cosine-modulated filter bank for separating the input into different frequency bands, and the ZC analysis is carried out on each band separately. The design of the filter bank requires the design of a prototype, which we accomplish using a Kaiser window approach. We show that the proposed method gives good results on synthesized and experimental data. The resolution is enhanced, and noise robustness is higher compared with the standard Fourier reconstruction. (c) 2012 Optical Society of America
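The core observation of this abstract, that the successive zero-crossing (ZC) intervals of a sinusoid are inversely related to its frequency, can be sketched in a few lines. This is not the authors' code; the sampling rate and test tone are arbitrary choices for illustration.

```python
import numpy as np

fs = 10_000.0                      # sampling rate (Hz), assumed
t = np.arange(0, 0.1, 1 / fs)
f_true = 440.0
signal = np.sin(2 * np.pi * f_true * t)

# sample indices where the sign changes -> zero crossings
sign = np.sign(signal)
zc = np.where(np.diff(sign) != 0)[0]

# for a pure sinusoid, successive ZC intervals are half-periods,
# so their statistics encode the frequency directly
intervals = np.diff(zc) / fs       # seconds between crossings
f_est = 1.0 / (2.0 * np.mean(intervals))
print(f_est)                       # ≈ 440
```

In the paper's setting, each subband of the cosine-modulated filter bank would isolate one such sinusoid, and the ZC statistics of that subband give its frequency, and hence the layer depth.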
Abstract:
The theories of relativity and quantum mechanics, the two most important physics discoveries of the 20th century, not only revolutionized our understanding of the nature of space-time and the way matter exists and interacts, but also became the building blocks of what we currently know as modern physics. My thesis studies both subjects in great depth --- the intersection takes place in gravitational-wave physics.
Gravitational waves are "ripples of space-time", long predicted by general relativity. Although indirect evidence of gravitational waves has been discovered from observations of binary pulsars, direct detection of these waves is still actively being pursued. An international array of laser interferometer gravitational-wave detectors has been constructed in the past decade, and a first generation of these detectors has taken several years of data without a discovery. At this moment, these detectors are being upgraded into second-generation configurations, which will have ten times better sensitivity. Kilogram-scale test masses of these detectors, highly isolated from the environment, are probed continuously by photons. The sensitivity of such a quantum measurement can often be limited by the Heisenberg Uncertainty Principle, and during such a measurement, the test masses can be viewed as evolving through a sequence of nearly pure quantum states.
The first part of this thesis (Chapter 2) concerns how to minimize the adverse effect of thermal fluctuations on the sensitivity of advanced gravitational-wave detectors, thereby making them closer to being quantum-limited. My colleagues and I present a detailed analysis of coating thermal noise in advanced gravitational-wave detectors, which is the dominant noise source of Advanced LIGO in the middle of the detection frequency band. We identified the two elastic loss angles, clarified the different components of the coating Brownian noise, and obtained their cross spectral densities.
The second part of this thesis (Chapters 3-7) concerns formulating experimental concepts and analyzing experimental results that demonstrate the quantum mechanical behavior of macroscopic objects - as well as developing theoretical tools for analyzing quantum measurement processes. In Chapter 3, we study the open quantum dynamics of optomechanical experiments in which a single photon strongly influences the quantum state of a mechanical object. We also explain how to engineer the mechanical oscillator's quantum state by modifying the single photon's wave function.
In Chapters 4-5, we build theoretical tools for analyzing the so-called "non-Markovian" quantum measurement processes. Chapter 4 establishes a mathematical formalism that describes the evolution of a quantum system (the plant), which is coupled to a non-Markovian bath (i.e., one with a memory) while at the same time being under continuous quantum measurement (by the probe field). This aims at providing a general framework for analyzing a large class of non-Markovian measurement processes. Chapter 5 develops a way of characterizing the non-Markovianity of a bath (i.e., whether and to what extent the bath remembers information about the plant) by perturbing the plant and watching for changes in its subsequent evolution. Chapter 6 re-analyzes a recent measurement of a mechanical oscillator's zero-point fluctuations, revealing nontrivial correlation between the measurement device's sensing noise and the quantum back-action noise.
Chapter 7 describes a model in which gravity is classical and matter motions are quantized, elaborating how the quantum motions of matter are affected by the fact that gravity is classical. It offers an experimentally plausible way to test this model (hence the nature of gravity) by measuring the center-of-mass motion of a macroscopic object.
The most promising gravitational waves for direct detection are those emitted from highly energetic astrophysical processes, sometimes involving black holes - a type of object predicted by general relativity whose properties depend highly on the strong-field regime of the theory. Although black holes have been inferred to exist at centers of galaxies and in certain so-called X-ray binary objects, detecting gravitational waves emitted by systems containing black holes will offer a much more direct way of observing black holes, providing unprecedented details of space-time geometry in the black-holes' strong-field region.
The third part of this thesis (Chapters 8-11) studies black-hole physics in connection with gravitational-wave detection.
Chapter 8 applies black hole perturbation theory to model the dynamics of a light compact object orbiting a massive central Schwarzschild black hole. In this chapter, we present a Hamiltonian formalism in which the low-mass object and the metric perturbations of the background spacetime are jointly evolved. Chapter 9 uses WKB techniques to analyze oscillation modes (quasi-normal modes, or QNMs) of spinning black holes. We obtain analytical approximations to the spectrum of the weakly-damped QNMs, with relative error O(1/L^2), and connect these frequencies to geometrical features of spherical photon orbits in Kerr spacetime. Chapter 11 focuses mainly on near-extremal Kerr black holes; we discuss a bifurcation in their QNM spectra for certain ranges of (l,m) (the angular quantum numbers) as a/M → 1. With tools prepared in Chapters 9 and 10, in Chapter 11 we obtain an analytical approximation to the scalar Green function in Kerr spacetime.
Abstract:
We present a succinct review of the canonical formalism of classical mechanics, followed by a brief review of the main representations of quantum mechanics. We emphasize the formal similarities between the corresponding equations and note that these similarities contributed to the formulation of quantum mechanics. Of course, the driving force behind the search for any new physics is experimental evidence.
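The formal similarity referred to here can be stated explicitly (a standard correspondence, given for illustration): Hamilton's equation of motion for an observable, written with the Poisson bracket, maps onto Heisenberg's equation of motion under the replacement of the bracket by the commutator divided by iħ:

```latex
\frac{dA}{dt} = \{A, H\}
\quad\longleftrightarrow\quad
\frac{d\hat{A}}{dt} = \frac{1}{i\hbar}\,[\hat{A}, \hat{H}],
\qquad
\{x, p\} = 1 \;\longleftrightarrow\; [\hat{x}, \hat{p}] = i\hbar .
```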
Abstract:
We propose a novel analysis alternative for emotion recognition from speech, based on two Fourier transforms. Fourier analysis allows different signals to be displayed and synthesized in terms of power spectral density distributions. A spectrogram of the voice signal is obtained by performing a short-time Fourier transform with Gaussian windows; this spectrogram portrays frequency-related features, such as vocal tract resonances and quasi-periodic excitations during voiced sounds. Emotions induce such characteristics in speech, which become apparent in the spectrogram's time-frequency distribution. The signal's time-frequency representation from the spectrogram is then considered an image and processed through a two-dimensional Fourier transform in order to perform a spatial Fourier analysis on it. Finally, features related to emotions in voiced speech are extracted and presented.
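The two-transform pipeline described here (Gaussian-windowed STFT, then a 2-D FFT of the resulting time-frequency image) can be sketched with numpy alone. The signal, window shape, and hop size below are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

fs = 8_000.0
t = np.arange(0, 1.0, 1 / fs)
# toy "voiced" signal: two formant-like tones
x = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 900 * t)

# first transform: short-time Fourier transform with a Gaussian window
win_len, hop = 256, 64
n = np.arange(win_len)
window = np.exp(-0.5 * ((n - win_len / 2) / (0.15 * win_len)) ** 2)

frames = [x[i:i + win_len] * window
          for i in range(0, len(x) - win_len, hop)]
spectrogram = np.abs(np.fft.rfft(frames, axis=1)).T   # freq x time image

# second transform: treat the time-frequency image as a 2-D signal and
# take its 2-D FFT, giving the "spatial" Fourier features of the abstract
features = np.abs(np.fft.fft2(spectrogram))
print(spectrogram.shape, features.shape)
```

In a real system the feature matrix would be reduced (e.g. by keeping low spatial frequencies) before being fed to a classifier; that step is outside the scope of this sketch.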
Abstract:
The standard approach to signal reconstruction in frequency-domain optical-coherence tomography (FDOCT) is to apply the inverse Fourier transform to the measurements. This technique offers limited resolution (due to Heisenberg's uncertainty principle). We propose a new super-resolution reconstruction method based on a parametric representation. We consider multilayer specimens, wherein each layer has a constant refractive index, and show that the backscattered signal from such a specimen fits accurately into the framework of the finite-rate-of-innovation (FRI) signal model and is represented by a finite number of free parameters. We deploy the high-resolution Prony method and show that high-quality, super-resolved reconstruction is possible with fewer measurements (about one-fourth of the number required for the standard Fourier technique). To further improve robustness to noise in practical scenarios, we take advantage of an iterated singular-value decomposition algorithm (Cadzow denoiser). We present results of Monte Carlo analyses, and assess statistical efficiency of the reconstruction techniques by comparing their performance against the Cramér-Rao bound. Reconstruction results on experimental data obtained from technical as well as biological specimens show a distinct improvement in resolution and signal-to-reconstruction noise offered by the proposed method in comparison with the standard approach.
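The Prony/annihilating-filter step at the heart of this FRI approach can be sketched compactly: two closely spaced frequencies (standing in for two specimen layers) are recovered exactly from a handful of noiseless samples, far fewer than an FFT would need to separate them. The frequencies and sample count below are illustrative assumptions.

```python
import numpy as np

# synthetic FDOCT-like signal: two "layers" -> two closely spaced tones
true_freqs = np.array([0.18, 0.22]) * np.pi       # rad/sample
n = np.arange(40)                                 # few measurements suffice
x = sum(np.exp(1j * w * n) for w in true_freqs)

K = 2                                             # model order (no. of layers)
# annihilating filter h of length K+1 satisfies sum_k h[k] x[n-k] = 0,
# and its polynomial roots are exp(j w_k)
rows = len(x) - K
H = np.array([x[i:i + K + 1][::-1] for i in range(rows)])
_, _, Vh = np.linalg.svd(H)
h = Vh[-1].conj()                                 # nullspace vector of H
est = np.sort(np.angle(np.roots(h)))
print(np.round(est / np.pi, 3))                   # ≈ [0.18, 0.22]
```

With noisy data the nullspace step becomes ill-conditioned, which is where the paper's Cadzow (iterated SVD) denoiser comes in: it projects the measurement matrix back to rank K before the annihilating filter is extracted.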
Abstract:
We address the question: does a system A being entangled with another system B put any constraints on the Heisenberg uncertainty relation (or the Schrödinger-Robertson inequality)? We find that equality in the uncertainty relation cannot be reached for any two noncommuting observables, for finite-dimensional Hilbert spaces, if the Schmidt rank of the entangled state is maximal. One consequence is that the lower bound of the uncertainty relation can never be attained for any two observables for qubits if the state is entangled. For infinite-dimensional Hilbert spaces too, we show that there is a class of physically interesting entangled states for which no two noncommuting observables can attain the minimum-uncertainty equality.
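The qubit case can be checked numerically. For a pure product state the Robertson bound can be saturated, while for one qubit of a maximally entangled Bell pair (whose reduced state is maximally mixed, the maximal-Schmidt-rank case) the bound collapses to zero while the uncertainty product stays strictly positive. This sketch uses σx and σz as the two noncommuting observables; the choice is illustrative.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def product_and_bound(rho, A, B):
    """Return (Delta A * Delta B, Robertson bound |<[A,B]>|/2) in state rho."""
    exp = lambda O: np.real(np.trace(rho @ O))
    dA = np.sqrt(exp(A @ A) - exp(A) ** 2)
    dB = np.sqrt(exp(B @ B) - exp(B) ** 2)
    bound = abs(np.trace(rho @ (A @ B - B @ A))) / 2
    return float(dA * dB), float(bound)

# pure product state |0>: equality of the relation is attainable
rho_pure = np.array([[1, 0], [0, 0]], dtype=complex)
print(product_and_bound(rho_pure, sx, sz))        # prints (0.0, 0.0)

# qubit A of a maximally entangled Bell pair: reduced state is I/2,
# so the bound is zero but the uncertainty product is strictly positive
rho_bell_A = I2 / 2
print(product_and_bound(rho_bell_A, sx, sz))      # prints (1.0, 0.0)
```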
Abstract:
While some of the deepest results in nature are those that give explicit bounds between important physical quantities, some of the most intriguing and celebrated of such bounds come from fields where there is still a great deal of disagreement and confusion regarding even the most fundamental aspects of the theories. For example, in quantum mechanics, there is still no complete consensus as to whether the limitations associated with Heisenberg's Uncertainty Principle derive from an inherent randomness in physics, or rather from limitations in the measurement process itself, resulting from phenomena like back action. Likewise, the second law of thermodynamics makes a statement regarding the increase in entropy of closed systems, yet the theory itself has neither a universally-accepted definition of equilibrium, nor an adequate explanation of how a system with underlying microscopically Hamiltonian dynamics (reversible) settles into a fixed distribution.
Motivated by these physical theories, and perhaps their inconsistencies, in this thesis we use dynamical systems theory to investigate how the very simplest of systems, even with no physical constraints, are characterized by bounds that give limits to the ability to make measurements on them. Using an existing interpretation, we start by examining how dissipative systems can be viewed as high-dimensional lossless systems, and how taking this view necessarily implies the existence of a noise process that results from the uncertainty in the initial system state. This fluctuation-dissipation result plays a central role in a measurement model that we examine, in particular describing how noise is inevitably injected into a system during a measurement, noise that can be viewed as originating either from the randomness of the many degrees of freedom of the measurement device, or of the environment. This noise constitutes one component of measurement back action, and ultimately imposes limits on measurement uncertainty. Depending on the assumptions we make about active devices, and their limitations, this back action can be offset to varying degrees via control. It turns out that using active devices to reduce measurement back action leads to estimation problems that have non-zero uncertainty lower bounds, the most interesting of which arise when the observed system is lossless. One such lower bound, a main contribution of this work, can be viewed as a classical version of a Heisenberg uncertainty relation between the system's position and momentum. We finally also revisit the murky question of how macroscopic dissipation appears from lossless dynamics, and propose alternative approaches for framing the question using existing systematic methods of model reduction.
Abstract:
Hydrogen is the only atom for which the Schrödinger equation is solvable. Consisting only of a proton and an electron, hydrogen is the lightest element and, nevertheless, is far from simple. Under ambient conditions it forms diatomic H2 molecules in the gas phase, but different temperatures and pressures lead to a complex phase diagram, which is not yet completely known. Solid hydrogen was first documented in 1899 [1] and was found to be insulating. At higher pressures, however, hydrogen can be metallized. In 1935 Wigner and Huntington predicted that the metallization pressure would be 25 GPa [2], where the molecules would dissociate to form a monoatomic metal, like the alkali metals that lie below hydrogen in the periodic table. The prediction of the metallization pressure turned out to be wrong: metallic hydrogen has not been found yet, even under pressures as high as 320 GPa. Nevertheless, extrapolations based on optical measurements suggest that a metallic phase may be attained at 450 GPa [3]. The interest of materials scientists in metallic hydrogen can be attributed, to a great extent, to Ashcroft, who in 1968 suggested that such a system could be a high-temperature superconductor [4]. The temperature at which this material would exhibit a transition from a superconducting to a non-superconducting state (Tc) was estimated to be around room temperature. The implications of such a statement are very interesting in the field of astrophysics: in planets that contain a large quantity of hydrogen and whose temperature is below Tc, superconducting hydrogen may be found, especially at the center, where the gravitational pressure is high. This might be the case for Jupiter, whose proportion of hydrogen is about 90%. There are also speculations suggesting that the high magnetic field of Jupiter is due to persistent currents related to the superconducting phase [5].
Metallization and superconductivity of hydrogen have puzzled scientists for decades, and the community is trying to answer several questions. For instance, what is the structure of hydrogen at very high pressures? Or, more generally: what is the maximum Tc a phonon-mediated superconductor can have [6]? A great experimental effort has been carried out pursuing metallic hydrogen and trying to answer the questions above; however, the characterization of the solid phases of hydrogen is a hard task. Achieving the high pressures needed to reach the sought phases requires advanced technologies. Diamond anvil cells (DACs) are commonly used devices. They consist of two diamonds with small-area tips, so that when a force is applied the exerted pressure is very large. This pressure is uniaxial, but it can be turned into hydrostatic pressure using transmitting media. Nowadays, this method makes it possible to reach pressures higher than 300 GPa, but even at this pressure hydrogen does not show metallic properties. A recently developed technique that improves on the DAC can reach pressures as high as 600 GPa [7], so it is a promising step forward in high-pressure physics. Another drawback is that the electronic density of the structures is so low that X-ray diffraction patterns have low resolution. For these reasons, ab initio studies are an important source of knowledge in this field, within their limitations. When treating hydrogen, there are many subtleties in the calculations: since the atoms are so light, the ions forming the crystalline lattice have significant displacements even when temperatures are very low, and even at T = 0 K, due to Heisenberg's uncertainty principle. Thus, the energy corresponding to this zero-point (ZP) motion is significant and has to be included in an accurate determination of the most stable phase.
This has been done by including ZP vibrational energies within the harmonic approximation for a range of pressures at T = 0 K, giving rise to a series of structures that are stable in their respective pressure ranges [8]. Very recently, a treatment of the phases of hydrogen that includes anharmonicity in the ZP energies has suggested that the relative stability of the phases may change with respect to calculations within the harmonic approximation [9]. Many of the proposed structures for solid hydrogen have been investigated. In particular, the Cmca-4 structure, which was found to be the stable one from 385 to 490 GPa [8], is metallic. Calculations for this structure, within the harmonic approximation for the ionic motion, predict a Tc of up to 242 K at 450 GPa [10]. Nonetheless, due to the large ionic displacements, the harmonic approximation may not suffice to describe the system correctly. The aim of this work is to apply a recently developed method to treat anharmonicity, the stochastic self-consistent harmonic approximation (SSCHA) [11], to Cmca-4 metallic hydrogen. This way, we will be able to study the effects of anharmonicity on the phonon spectrum and to try to understand the changes it may provoke in the value of Tc. The work is structured as follows. First we present the theoretical basis of the calculations: density functional theory (DFT) for the electronic calculations, phonons in the harmonic approximation, and the SSCHA. Then we apply these methods to Cmca-4 hydrogen and discuss the results obtained. In the last chapter we draw some conclusions and propose possible future work.