12 results for accelerometer, randomness check
in CaltechTHESIS
Abstract:
Curve samplers are sampling algorithms that proceed by viewing the domain as a vector space over a finite field, and randomly picking a low-degree curve in it as the sample. Curve samplers exhibit a nice property besides the sampling property: the restriction of low-degree polynomials over the domain to the sampled curve is still low-degree. This property is often used in combination with the sampling property and has found many applications, including PCP constructions, local decoding of codes, and algebraic PRG constructions.
The randomness complexity of curve samplers is a crucial parameter for their applications. It is known that (non-explicit) curve samplers using O(log N + log(1/δ)) random bits exist, where N is the domain size and δ is the confidence error. The question of explicitly constructing randomness-efficient curve samplers was first raised in [TU06], where curve samplers with near-optimal randomness complexity were obtained.
In this thesis, we present an explicit construction of low-degree curve samplers with optimal randomness complexity (up to a constant factor) that sample curves of degree $(m \log_q(1/\delta))^{O(1)}$ in $\mathbb{F}_q^m$. Our construction is a delicate combination of several components, including extractor machinery, limited independence, iterated sampling, and list-recoverable codes.
Abstract:
The model dependence inherent in hadronic calculations is one of the dominant sources of uncertainty in the theoretical prediction of the anomalous magnetic moment of the muon. In this thesis, we focus on the charged pion contribution and turn a critical eye on the models employed in the few previous calculations of $a_\mu^{\pi^+\pi^-}$. Chiral perturbation theory provides a check on these models at low energies, and we therefore calculate the charged pion contribution to light-by-light (LBL) scattering to $\mathcal{O}(p^6)$. We show that the dominant corrections to the leading order (LO) result come from two low energy constants which show up in the form factors for the $\gamma\pi\pi$ and $\gamma\gamma\pi\pi$ vertices. Comparison with the existing models reveals a potentially significant omission: none include the pion polarizability corrections associated with the $\gamma\gamma\pi\pi$ vertex. We next consider alternative models where the pion polarizability is produced through exchange of the $a_1$ axial vector meson. These have poor UV behavior, however, making them unsuited for the $a_\mu^{\pi^+\pi^-}$ calculation. We turn to a simpler form factor modeling approach, generating two distinct models which reproduce the pion polarizability corrections at low energies, have the correct QCD scaling at high energies, and generate finite contributions to $a_\mu^{\pi^+\pi^-}$. With these two models, we calculate the charged pion contribution to the anomalous magnetic moment of the muon, finding values larger than those previously reported: $a_\mu^\mathrm{I} = -1.779(4)\times10^{-10}\,,\,a_\mu^\mathrm{II} = -4.892(3)\times10^{-10}$.
Abstract:
The low-thrust guidance problem is defined as the minimum terminal variance (MTV) control of a space vehicle subjected to random perturbations of its trajectory. To accomplish this control task, only bounded thrust level and thrust angle deviations are allowed, and these must be calculated based solely on the information gained from noisy, partial observations of the state. In order to establish the validity of various approximations, the problem is first investigated under the idealized conditions of perfect state information and negligible dynamic errors. To check each approximate model, an algorithm is developed to facilitate the computation of the open loop trajectories for the nonlinear bang-bang system. Using the results of this phase in conjunction with the Ornstein-Uhlenbeck process as a model for the random inputs to the system, the MTV guidance problem is reformulated as a stochastic, bang-bang, optimal control problem. Since a complete analytic solution seems to be unattainable, asymptotic solutions are developed by numerical methods. However, it is shown analytically that a Kalman filter in cascade with an appropriate nonlinear MTV controller is an optimal configuration. The resulting system is simulated using the Monte Carlo technique and is compared to other guidance schemes of current interest.
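The random inputs above are modeled by an Ornstein-Uhlenbeck process. A minimal simulation sketch (illustrative parameters only, not the thesis's actual vehicle model), using the Euler-Maruyama method:

```python
import numpy as np

def simulate_ou(theta=1.0, sigma=0.5, dt=1e-3, n_steps=50_000, seed=0):
    """Euler-Maruyama simulation of dx = -theta*x dt + sigma dW."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = 0.0
    for k in range(1, n_steps):
        x[k] = x[k-1] - theta * x[k-1] * dt \
               + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

x = simulate_ou()
# The stationary variance of an OU process is sigma^2 / (2*theta) = 0.125,
# which the sample variance of the second half of the run should approach.
print(np.var(x[len(x)//2:]))
```

The mean-reverting drift is what distinguishes this model from plain Brownian motion and makes it a natural stand-in for bounded random perturbations.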
Abstract:
While some of the deepest results in nature are those that give explicit bounds between important physical quantities, some of the most intriguing and celebrated of such bounds come from fields where there is still a great deal of disagreement and confusion regarding even the most fundamental aspects of the theories. For example, in quantum mechanics, there is still no complete consensus as to whether the limitations associated with Heisenberg's Uncertainty Principle derive from an inherent randomness in physics, or rather from limitations in the measurement process itself, resulting from phenomena like back action. Likewise, the second law of thermodynamics makes a statement regarding the increase in entropy of closed systems, yet the theory itself has neither a universally-accepted definition of equilibrium, nor an adequate explanation of how a system with underlying microscopically Hamiltonian dynamics (reversible) settles into a fixed distribution.
Motivated by these physical theories, and perhaps their inconsistencies, in this thesis we use dynamical systems theory to investigate how the very simplest of systems, even with no physical constraints, are characterized by bounds that give limits to the ability to make measurements on them. Using an existing interpretation, we start by examining how dissipative systems can be viewed as high-dimensional lossless systems, and how taking this view necessarily implies the existence of a noise process that results from the uncertainty in the initial system state. This fluctuation-dissipation result plays a central role in a measurement model that we examine, in particular describing how noise is inevitably injected into a system during a measurement, noise that can be viewed as originating either from the randomness of the many degrees of freedom of the measurement device, or of the environment. This noise constitutes one component of measurement back action, and ultimately imposes limits on measurement uncertainty. Depending on the assumptions we make about active devices, and their limitations, this back action can be offset to varying degrees via control. It turns out that using active devices to reduce measurement back action leads to estimation problems that have non-zero uncertainty lower bounds, the most interesting of which arise when the observed system is lossless. One such lower bound, a main contribution of this work, can be viewed as a classical version of a Heisenberg uncertainty relation between the system's position and momentum. We finally also revisit the murky question of how macroscopic dissipation appears from lossless dynamics, and propose alternative approaches for framing the question using existing systematic methods of model reduction.
Abstract:
Light has long been used for the precise measurement of moving bodies, but the burgeoning field of optomechanics is concerned with the interaction of light and matter in a regime where the typically weak radiation pressure force of light is able to push back on the moving object. This field began with the realization in the late 1960s that the momentum imparted by a recoiling photon on a mirror would place fundamental limits on the smallest measurable displacement of that mirror. This coupling between the frequency of light and the motion of a mechanical object does much more than simply add noise, however. It has been used to cool objects to their quantum ground state, demonstrate electromagnetically induced transparency, and modify the damping and spring constant of the resonator. Amazingly, these radiation pressure effects have now been demonstrated in systems ranging 18 orders of magnitude in mass (kg to fg).
In this work we will focus on three diverse experiments in three different optomechanical devices which span the fields of inertial sensors, closed-loop feedback, and nonlinear dynamics. The mechanical elements presented cover 6 orders of magnitude in mass (ng to fg), but they all employ nano-scale photonic crystals to trap light and resonantly enhance the light-matter interaction. In the first experiment we take advantage of the sub-femtometer displacement resolution of our photonic crystals to demonstrate a sensitive chip-scale optical accelerometer with a kHz-frequency mechanical resonator. This sensor has a noise density of approximately 10 micro-g/rt-Hz over a usable bandwidth of approximately 20 kHz and we demonstrate at least 50 dB of linear dynamic sensor range. We also discuss methods to further improve performance of this device by a factor of 10.
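As a rough sanity check on the figures quoted above (assuming, purely for illustration, a flat noise floor across the whole band), the smallest acceleration resolvable over the full bandwidth follows from the noise density times the square root of the bandwidth:

```python
import math

# Back-of-the-envelope estimate, not a figure from the thesis itself.
noise_density = 10e-6   # g / sqrt(Hz), quoted noise density
bandwidth = 20e3        # Hz, quoted usable bandwidth
a_min = noise_density * math.sqrt(bandwidth)  # smallest resolvable acceleration
print(f"{a_min * 1e3:.2f} milli-g")  # ≈ 1.41 milli-g
```

Narrowing the measurement band (or averaging longer) lowers this floor, which is why the density, rather than a single number, is the natural spec.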
In the second experiment, we used a closed-loop measurement and feedback system to damp and cool a room-temperature MHz-frequency mechanical oscillator from a phonon occupation of 6.5 million down to just 66. At the time of the experiment, this represented a world-record result for the laser cooling of a macroscopic mechanical element without the aid of cryogenic pre-cooling. Furthermore, this closed-loop damping yields a high-resolution force sensor with a practical bandwidth of 200 kHz and the method has applications to other optomechanical sensors.
The final experiment contains results from a GHz-frequency mechanical resonator in a regime where the nonlinearity of the radiation-pressure interaction dominates the system dynamics. In this device we show self-oscillations of the mechanical element that are driven by multi-photon-phonon scattering. Control of the system allows us to initialize the mechanical oscillator into a stable high-amplitude attractor which would otherwise be inaccessible. To provide context, we begin this work by first presenting an intuitive overview of optomechanical systems and then providing an extended discussion of the principles underlying the design and fabrication of our optomechanical devices.
Abstract:
Flash memory is a leading storage medium with excellent features such as random access and high storage density. However, it also faces significant reliability and endurance challenges. In flash memory, the charge level in the cells can be easily increased, but removing charge requires an expensive erasure operation. In this thesis we study rewriting schemes that enable the data stored in a set of cells to be rewritten by only increasing the charge level in the cells. We consider two types of modulation scheme: a conventional modulation based on the absolute levels of the cells, and a recently-proposed scheme based on the relative cell levels, called rank modulation. The contributions of this thesis to the study of rewriting schemes for rank modulation include the following: we
•propose a new method of rewriting in rank modulation, beyond the previously proposed method of “push-to-the-top”;
•study the limits of rewriting with the newly proposed method, and derive a tight upper bound of 1 bit per cell;
•extend the rank-modulation scheme to support rankings with repetitions, in order to improve the storage density;
•derive a tight upper bound of 2 bits per cell for rewriting in rank modulation with repetitions;
•construct an efficient rewriting scheme that asymptotically approaches the upper bound of 2 bits per cell.
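The “push-to-the-top” operation referred to above can be sketched in a few lines (an illustrative toy, not the thesis's rewriting scheme): data is carried by the relative order of the cell charge levels, and a rewrite only ever increases charge.

```python
def push_to_top(levels, i):
    """Raise cell i strictly above the current maximum (charge only increases)."""
    levels = list(levels)
    levels[i] = max(levels) + 1
    return levels

def ranking(levels):
    """The stored symbol: cell indices ordered from highest to lowest charge."""
    return sorted(range(len(levels)), key=lambda i: -levels[i])

levels = [3, 1, 2]
print(ranking(levels))                    # → [0, 2, 1]
print(ranking(push_to_top(levels, 1)))    # → [1, 0, 2]
```

Because only relative order matters, no erasure is ever needed to change the stored permutation; cells are only recharged upward until a block-level reset.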
The next part of this thesis studies rewriting schemes for a conventional absolute-levels modulation. The considered model is called “write-once memory” (WOM). We focus on WOM schemes that achieve the capacity of the model. In recent years several capacity-achieving WOM schemes were proposed, based on polar codes and randomness extractors. The contributions of this thesis to the study of WOM schemes include the following: we
•propose a new capacity-achieving WOM scheme based on sparse-graph codes, and show its attractive properties for practical implementation;
•improve the design of polar WOM schemes to remove the reliance on shared randomness and include an error-correction capability.
The last part of the thesis studies the local rank-modulation (LRM) scheme, in which a sliding window going over a sequence of real-valued variables induces a sequence of permutations. The LRM scheme is used to simulate a single conventional multi-level flash cell. The simulated cell is realized by a Gray code traversing all the relative-value states where, physically, the transition between two adjacent states in the Gray code is achieved by using a single “push-to-the-top” operation. The main results of the last part of the thesis are two constructions of Gray codes with asymptotically-optimal rate.
Abstract:
A dilution refrigerator has been constructed capable of producing steady state temperatures less than 0.075°K. The first part of this work is concerned with the design and construction of this machine. Enough theory is presented to allow one to understand the operation and critical design factors of a dilution refrigerator. The performance of our refrigerator is compared with the operating characteristics of three other dilution refrigerators appearing in the present literature.
The dilution refrigerator constructed was used to measure the nuclear contribution to the low temperature specific heat of a pure, single-crystalline sample of rhenium metal. Measurements were made in magnetic fields from 0 to 12.5 kOe for the temperature range 0.13°K - 0.52°K. The second part of this work discusses the results of these experiments. The expected nuclear contribution is not found when the sample is in the superconducting state. This is believed to be due to the long spin-lattice relaxation times in superconductors. In the normal state, for the temperature range studied, the nuclear contribution is given by A/T² where A = 0.061 ± 0.002 millijoules-K/mole. The value of A is found to increase to A = 0.077 ± 0.004 millijoules-K/mole when the sample is located in a magnetic field of 12.5 kOe.
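To get a feel for the quoted fit (using only the zero-field value and assuming, for illustration, that the A/T² term dominates), the nuclear specific heat at the ends of the measured range works out to:

```python
# Nuclear specific heat c_n = A/T^2 with the quoted zero-field coefficient.
A = 0.061  # millijoules-K/mole
for T in (0.13, 0.52):  # K, ends of the measured range
    print(f"T = {T} K: c_n = {A / T**2:.3f} mJ/(mole K)")
```

The steep 1/T² growth toward low temperature is what makes this contribution accessible only in a dilution-refrigerator experiment.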
From the measured value of A the splitting of the energy levels of the nuclear spin system due to the interaction of the internal crystalline electric field gradients with the nuclear quadrupole moments is calculated. A comparison is made between the predicted and measured magnetic dependence of the specific heat. Finally, predictions are made of future nuclear magnetic resonance experiments which may be performed to check the results obtained by calorimetry here, and further to investigate existing theories concerning the sources of electric field gradients in metals.
Abstract:
An exact solution to the monoenergetic Boltzmann equation is obtained for the case of a plane isotropic burst of neutrons introduced at the interface separating two adjacent, dissimilar, semi-infinite media. The method of solution used is to remove the time dependence by a Laplace transformation, solve the transformed equation by the normal mode expansion method, and then invert to recover the time dependence.
The general result is expressed as a sum of definite, multiple integrals, one of which contains the uncollided wave of neutrons originating at the source plane. It is possible to obtain a simplified form for the solution at the interface, and certain numerical calculations are made there.
The interface flux in two adjacent moderators is calculated and plotted as a function of time for several moderator materials. For each case it is found that the flux decay curve has an asymptotic slope given accurately by diffusion theory. Furthermore, the interface current is observed to change directions when the scattering and absorption cross sections of the two moderator materials are related in a certain manner. More specifically, the reflection process in two adjacent moderators appears to depend initially on the scattering properties and for long times on the absorption properties of the media.
This analysis contains both the single infinite and semi-infinite medium problems as special cases. The results in these two special cases provide a check on the accuracy of the general solution since they agree with solutions of these problems obtained by separate analyses.
Abstract:
Experimental and theoretical studies have been made of the electrothermal waves occurring in a nonequilibrium MHD plasma. These waves are caused by an instability that occurs when a plasma having a dependence of conductivity on current density is subjected to crossed electric and magnetic fields. Theoretically, these waves were studied by developing and solving the equations of a steady, one-dimensional nonuniformity in electron density. From these nonlinear equations, predictions of the maximum amplitude and of the half width of steady waves could be obtained. Experimentally, the waves were studied in a nonequilibrium discharge produced in a potassium-seeded argon plasma at 2000°K and 1 atm. pressure. The behavior of such a discharge with four different configurations of electrodes was determined from photographs, photomultiplier measurements, and voltage probes. These four configurations were chosen to produce steady waves, to check the stability of steady waves, and to observe the manifestation of the waves in an MHD generator or accelerator configuration.
Steady, one-dimensional waves were found to exist in a number of situations, and where they existed, their characteristics agreed with the predictions of the steady theory. Some extensions of this theory were necessary, however, to describe the transient phenomena occurring in the inlet region of a discharge transverse to the gas flow. It was also found that in a discharge away from the stabilizing effect of the electrodes, steady waves became unstable for large Hall parameters. Methods of prediction of the effective electrical conductivity and Hall parameter of a plasma with nonuniformities caused by the electrothermal waves were also studied. Using these methods and the values of amplitude predicted by the steady theory, it was found that the measured decrease in transverse conductivity of an MHD device, 50 per cent at a Hall parameter of 5, could be accounted for in terms of the electrothermal instability.
Abstract:
Theoretical and experimental studies were conducted to investigate the wave induced oscillations in an arbitrary shaped harbor with constant depth which is connected to the open-sea.
A theory termed the “arbitrary shaped harbor” theory is developed. The solution of the Helmholtz equation, ∇²f + k²f = 0, is formulated as an integral equation; an approximate method is employed to solve the integral equation by converting it to a matrix equation. The final solution is obtained by equating, at the harbor entrance, the wave amplitude and its normal derivative obtained from the solutions for the regions outside and inside the harbor.
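The matching step can be stated explicitly (notation assumed here, not taken from the thesis): writing $f_{\mathrm{out}}$ for the open-sea solution and $f_{\mathrm{in}}$ for the solution inside the harbor, continuity of the wave amplitude and its normal derivative across the entrance $\Gamma$ requires

```latex
f_{\mathrm{out}} = f_{\mathrm{in}}, \qquad
\frac{\partial f_{\mathrm{out}}}{\partial n} = \frac{\partial f_{\mathrm{in}}}{\partial n}
\quad \text{on } \Gamma .
```

Discretizing these conditions along the entrance is what closes the matrix equation for the unknown boundary values, after which the response everywhere follows.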
Two special theories called the circular harbor theory and the rectangular harbor theory are also developed. The coordinates inside a circular and a rectangular harbor are separable; therefore, the solution for the region inside these harbors is obtained by the method of separation of variables. For the solution in the open-sea region, the same method is used as that employed for the arbitrary shaped harbor theory. The final solution is also obtained by a matching procedure similar to that used for the arbitrary shaped harbor theory. These two special theories provide a useful analytical check on the arbitrary shaped harbor theory.
Experiments were conducted to verify the theories in a wave basin 15 ft wide by 31 ft long with an effective system of wave energy dissipators mounted along the boundary to simulate the open-sea condition.
Four harbors were investigated theoretically and experimentally: circular harbors with a 10° opening and a 60° opening, a rectangular harbor, and a model of the East and West Basins of Long Beach Harbor located in Long Beach, California.
Theoretical solutions for these four harbors using the arbitrary shaped harbor theory were obtained. In addition, the theoretical solutions for the circular harbors and the rectangular harbor using the two special theories were also obtained. In each case, the theories have proven to agree well with the experimental data.
It is found that: (1) the resonant frequencies for a specific harbor are predicted correctly by the theory, although the amplification factors at resonance are somewhat larger than those found experimentally, (2) for the circular harbors, as the width of the harbor entrance increases, the amplification at resonance decreases, but the wave number bandwidth at resonance increases, (3) each peak in the curve of entrance velocity vs. incident wave period corresponds to a distinct mode of resonant oscillation inside the harbor; thus the velocity at the harbor entrance appears to be a good indicator for resonance in harbors of complicated shape, (4) the present theory can be applied with confidence to prototype harbors with relatively uniform depth and reflective interior boundaries.
Abstract:
The optomechanical interaction is an extremely powerful tool with which to measure mechanical motion. The displacement resolution of chip-scale optomechanical systems has been measured on the order of 1/10th of a proton radius. So strong is this optomechanical interaction that it has recently been used to remove almost all thermal noise from a mechanical resonator and observe its quantum ground-state of motion starting from cryogenic temperatures.
In this work, chapter 1 describes the basic physics of the canonical optomechanical system, optical measurement techniques, and how the optomechanical interaction affects the coupled mechanical resonator. In chapter 2, we describe our techniques for realizing this canonical optomechanical system in a chip-scale form factor.
In chapter 3, we describe an experiment where we used radiation pressure feedback to cool a mesoscopic mechanical resonator near its quantum ground-state from room-temperature. We cooled the resonator from a room temperature phonon occupation of <n> = 6.5 million to an occupation of <n> = 66, which means the resonator is in its ground state approximately 2% of the time, while being coupled to a room-temperature thermal environment. At the time of this work, this is the closest a mesoscopic mechanical resonator has been to its ground-state of motion at room temperature, and this work begins to open the door to room-temperature quantum control of mechanical objects.
Chapter 4 begins with the realization that the displacement resolutions achieved by optomechanical systems can surpass those of conventional MEMS sensors by an order of magnitude or more. This provides the motivation to develop and calibrate an optomechanical accelerometer with a resolution of approximately 10 micro-g/rt-Hz over a bandwidth of approximately 30 kHz. In chapter 5, we improve upon the performance and practicality of this sensor by greatly increasing the test mass size, investigating and reducing low-frequency noise, and incorporating more robust optical coupling techniques and capacitive wavelength tuning. Finally, in chapter 6 we present our progress towards developing another optomechanical inertial sensor - a gyroscope.
Abstract:
The induced magnetic uniaxial anisotropy of Ni-Fe alloy films has been shown to be related to the crystal structure of the film. By use of electron diffraction, the crystal structure of vacuum-deposited films was determined over the composition range 5% to 85% Ni, with substrate temperatures during deposition at various values in the range 25° to 500°C. The phase diagram determined in this way has boundaries which are in fair agreement with the equilibrium boundaries for bulk material above 400°C. The (α + γ) mixture phase disappears below 100°C.
The measurement of uniaxial anisotropy field for 25% Ni-Fe alloy films deposited at temperatures in the range -80°C to 375°C has been carried out. Comparison of the crystal structure phase diagram with the present data and those published by Wilts indicates that the anisotropy is strongly sensitive to crystal structure. Others have proposed pair ordering as an important source of anisotropy because of an apparent peak in the anisotropy energy at about 50% Ni composition. The present work shows no such peak, and leads to the conclusion that pair ordering cannot be a dominant contributor.
Width of the 180° domain wall in 76% Ni-Fe alloy films as a function of film thickness up to 1800 Å was measured using the defocused mode of Lorentz microscopy. For the thinner films, the measured wall widths are in good agreement with earlier data obtained by Fuchs. For films thicker than 800 Å, the wall width increases with film thickness to about 9000 Å at 1800 Å film thickness. Similar measurements for polycrystalline Co films with thickness from 200 to 1500 Å have been made. The wall width increases from 3000 Å at 400 Å film thickness to about 6000 Å at 1500 Å film thickness. The wall widths for Ni-Fe and Co films are much greater than predicted by present theories. The validity of the classical determination of wall width is discussed, and the comparison of the present data with theoretical results is given.
Finally, an experimental study of ripple by Lorentz microscopy in Ni-Fe alloy films has been carried out. The following should be noted: (1) the only practical way to determine experimentally a meaningful wavelength is to find a well-defined ripple periodicity by visual inspection of a photomicrograph. (2) The average wavelength is of the order of 1 µm. This value is in reasonable agreement with the main wavelength predicted by the theories developed by others. The dependence of wavelength on substrate deposition temperature, alloy composition and the external magnetic field has also been studied and the results are compared with theoretical predictions. (3) The experimental fact that the ripple structure could not be observed in completely epitaxial films gives confirmation that the ripple results from the randomness of crystallite orientation. Furthermore, the experimental observation that the ripple disappeared between 71 and 75% Ni supports the theory that the ripple amplitude is directly dependent on the crystalline anisotropy. An attempt to experimentally determine the order of magnitude of the ripple angle was carried out. The measured angle was about 0.02 rad. The discrepancy between the experimental data and the theoretical prediction is serious. The accurate experimental determination of ripple angle is an unsolved problem.