8 results for Speed limits
in CaltechTHESIS
Abstract:
A method is developed to calculate the settling speed of dilute arrays of spheres for three cases: I, a random array of freely moving particles; II, a random array of rigidly held particles; and III, a cubic array of particles. The basic idea of the technique is to give a formal representation of the solution and then manipulate this representation in a straightforward manner to obtain the result. For infinite arrays of spheres, our results agree with those previously found by other authors, and the analysis here appears to be simpler. This method is able to obtain more terms in the answer than was possible with Saffman's unified treatment for point particles. Some results for arbitrary two-sphere distributions are presented, and an analysis of the wall effect for particles settling in a tube is given. It is expected that the method presented here can be generalized to solve other types of problems.
Abstract:
In Part I of this thesis, a new magnetic spectrometer experiment which measured the β spectrum of ^(35)S is described. New limits on heavy neutrino emission in nuclear β decay were set, for a heavy neutrino mass range between 12 and 22 keV. In particular, this measurement rejects the hypothesis that a 17 keV neutrino is emitted with sin^2 θ = 0.0085, at the 6σ statistical level. In addition, an auxiliary experiment was performed, in which an artificial kink was induced in the β spectrum by means of an absorber foil which masked a fraction of the source area. In this measurement, the sensitivity of the magnetic spectrometer to the spectral features of heavy neutrino emission was demonstrated.
In Part II, a measurement of the neutron spallation yield and multiplicity by the Cosmic-ray Underground Background Experiment is described. The production of fast neutrons by muons was investigated at an underground depth of 20 meters water equivalent, with a 200 liter detector filled with 0.09% Gd-loaded liquid scintillator. We measured a neutron production yield of (3.4 ± 0.7) x 10^(-5) neutrons per muon-g/cm^2, in agreement with other experiments. A single-to-double neutron multiplicity ratio of 4:1 was observed. In addition, stopped π^+ decays to µ^+ and then e^+ were observed, as was the associated production of pions and neutrons by the muon spallation interaction. It was seen that practically all of the π^+ produced by muons were also accompanied by at least one neutron. These measurements serve as the basis for neutron background estimates for the San Onofre neutrino detector.
Abstract:
Despite the complexity of biological networks, we find that certain common architectures govern network structures. These architectures impose fundamental constraints on system performance and create tradeoffs that the system must balance in the face of uncertainty in the environment. This means that while a system may be optimized for a specific function through evolution, the optimal achievable state must follow these constraints. One such constraining architecture is autocatalysis, as seen in many biological networks including glycolysis and ribosomal protein synthesis. Using a minimal model, we show that ATP autocatalysis in glycolysis imposes stability and performance constraints and that the experimentally well-studied glycolytic oscillations are in fact a consequence of a tradeoff between error minimization and stability. We also show that additional complexity in the network results in increased robustness. Ribosome synthesis is also autocatalytic: ribosomes must be used to make more ribosomal proteins. When ribosomes have higher protein content, the autocatalysis is increased. We show that this autocatalysis destabilizes the system, slows down its response, and also constrains the system's performance. On a larger scale, transcriptional regulation of whole organisms also follows architectural constraints, and this can be seen in the differences between bacterial and yeast transcription networks. We show that the degree distribution of the bacterial transcription network follows a power law, while that of the yeast network follows an exponential distribution. We then explore the evolutionary models that have previously been proposed and show that neither the preferential-linking model nor the duplication-divergence model of network evolution generates the power-law, hierarchical structure found in bacteria.
However, in real biological systems, the generation of new nodes occurs through both duplication and horizontal gene transfers, and we show that a biologically reasonable combination of the two mechanisms generates the desired network.
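The combined growth mechanism described above can be sketched in a toy simulation (an illustrative model written for this summary, not the thesis's actual code; the function name, parameter values, and attachment rules are all invented for the sketch): each new gene either duplicates an existing node, inheriting that node's regulators, or arrives by a horizontal-transfer-like event and attaches to a regulator chosen preferentially by out-degree.

```python
import random

def grow_network(n_final, p_dup=0.5, n_init=5, seed=0):
    """Toy model of transcription-network growth combining duplication
    and preferential (horizontal-transfer-like) attachment."""
    rng = random.Random(seed)
    # adjacency: regulator -> set of target genes
    targets = {i: set() for i in range(n_init)}
    # seed with a small chain of regulatory links: 0->1->...->n_init-1
    for i in range(n_init - 1):
        targets[i].add(i + 1)
    while len(targets) < n_final:
        new = len(targets)
        if rng.random() < p_dup:
            # duplication: the new gene inherits the regulators of a
            # randomly chosen existing gene
            src = rng.randrange(new)
            targets[new] = set()
            for reg in range(new):
                if src in targets[reg]:
                    targets[reg].add(new)
        else:
            # horizontal transfer: the new gene attaches to a regulator
            # chosen in proportion to (out-degree + 1)
            weights = [len(targets[r]) + 1 for r in range(new)]
            reg = rng.choices(range(new), weights=weights, k=1)[0]
            targets[new] = set()
            targets[reg].add(new)
    return targets

net = grow_network(500)
out_degrees = sorted((len(t) for t in net.values()), reverse=True)
```

With both mechanisms active, a few heavily connected regulators (hubs) emerge in `out_degrees`, qualitatively echoing the heavy-tailed degree distributions discussed above; the sketch is of course far simpler than the models analyzed in the thesis.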
Abstract:
In the quest to develop viable designs for third-generation optical interferometric gravitational-wave detectors, one strategy is to monitor the relative momentum or speed of the test-mass mirrors, rather than monitoring their relative position. The most straightforward design for a speed-meter interferometer that accomplishes this is described and analyzed in Chapter 2. This design (due to Braginsky, Gorodetsky, Khalili, and Thorne) is analogous to a microwave-cavity speed meter conceived by Braginsky and Khalili. A mathematical mapping between the microwave speed meter and the optical interferometric speed meter is developed and used to show (in accord with the speed being a quantum nondemolition observable) that in principle the interferometric speed meter can beat the gravitational-wave standard quantum limit (SQL) by an arbitrarily large amount, over an arbitrarily wide range of frequencies. However, in practice, to reach or beat the SQL, this specific speed meter requires exorbitantly high input light power. The physical reason for this is explored, along with other issues such as constraints on performance due to optical dissipation.
Chapter 3 proposes a more sophisticated version of a speed meter. This new design requires only a modest input power and appears to be a fully practical candidate for third-generation LIGO. It can beat the SQL (the approximate sensitivity of second-generation LIGO interferometers) over a broad range of frequencies (~10 to 100 Hz in practice) by a factor h/h_SQL ~ √(W^SQL_circ/W_circ). Here W_circ is the light power circulating in the interferometer arms and W^SQL_circ ≃ 800 kW is the circulating power required to beat the SQL at 100 Hz (the LIGO-II power). If squeezed vacuum (with a power-squeeze factor e^(-2R)) is injected into the interferometer's output port, the SQL can be beaten with a much-reduced laser power: h/h_SQL ~ √(W^SQL_circ e^(-2R)/W_circ). For realistic parameters (e^(-2R) ≃ 0.1 and W_circ ≃ 800 to 2000 kW), the SQL can be beaten by a factor of ~3 to 4 from 10 to 100 Hz. [However, as the power increases in these expressions, the speed meter becomes more narrowband; additional power and re-optimization of some parameters are required to maintain the wide band.] By performing frequency-dependent homodyne detection on the output (with the aid of two kilometer-scale filter cavities), one can markedly improve the interferometer's sensitivity at frequencies above 100 Hz.
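As a quick consistency check of the quoted numbers (an illustrative back-of-the-envelope evaluation, not taken from the thesis), take a power squeeze factor of 10, i.e. e^(-2R) = 0.1, and the lower end of the quoted circulating power:

```latex
\frac{h}{h_{\rm SQL}} \sim
\sqrt{\frac{W^{\rm SQL}_{\rm circ}}{W_{\rm circ}}\,e^{-2R}}
= \sqrt{\frac{800~\mathrm{kW}}{800~\mathrm{kW}}\times 0.1}
\approx 0.32,
```

i.e. the SQL is beaten by a factor of roughly 3 at W_circ = 800 kW, with further gains as the circulating power is raised toward 2000 kW, consistent with the quoted factor of ~3 to 4 over the 10 to 100 Hz band.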
Chapters 2 and 3 are part of an ongoing effort to develop a practical variant of an interferometric speed meter and to combine the speed meter concept with other ideas to yield a promising third-generation interferometric gravitational-wave detector that entails low laser power.
Chapter 4 is a contribution to the foundations for analyzing sources of gravitational waves for LIGO. Specifically, it presents an analysis of the tidal work done on a self-gravitating body (e.g., a neutron star or black hole) in an external tidal field (e.g., that of a binary companion). The change in the mass-energy of the body as a result of the tidal work, or "tidal heating," is analyzed using the Landau-Lifshitz pseudotensor and the local asymptotic rest frame of the body. It is shown that the work done on the body is gauge invariant, while the body-tidal-field interaction energy contained within the body's local asymptotic rest frame is gauge dependent. This is analogous to Newtonian theory, where the interaction energy is shown to depend on how one localizes gravitational energy, but the work done on the body is independent of that localization. These conclusions play a role in analyses, by others, of the dynamics and stability of the inspiraling neutron-star binaries whose gravitational waves are likely to be seen and studied by LIGO.
Abstract:
Optical microscopy has become an indispensable tool for biological research since its invention, owing mostly to its sub-cellular spatial resolution, non-invasiveness, instrumental simplicity, and the intuitive observations it provides. Nonetheless, obtaining reliable, quantitative spatial information from conventional wide-field optical microscopy is not as straightforward as it appears. This is because, in the acquired images, information about out-of-focus regions is spatially blurred and mixed with in-focus information. In other words, conventional wide-field optical microscopy transforms the three-dimensional, or volumetric, information about an object into a two-dimensional form in each acquired image, and therefore distorts the spatial information about the object. Several fluorescence holography-based methods have demonstrated the ability to obtain three-dimensional information about objects, but these methods generally rely on decomposing stereoscopic visualizations to extract volumetric information and are unable to resolve complex three-dimensional structures such as a multi-layer sphere.
The concept of optical-sectioning techniques, on the other hand, is to detect only two-dimensional information about an object at each acquisition. Specifically, each image obtained by optical-sectioning techniques contains mainly the information about an optically thin layer inside the object, as if only a thin histological section is being observed at a time. Using such a methodology, obtaining undistorted volumetric information about the object simply requires taking images of the object at sequential depths.
Among existing methods of obtaining volumetric information, the practicability of optical sectioning has made it the most commonly used and most powerful one in biological science. However, when applied to imaging living biological systems, conventional single-point-scanning optical-sectioning techniques often cause some degree of photodamage because of the high focal intensity at the scanning point. To overcome this issue, several wide-field optical-sectioning techniques have been proposed and demonstrated, although not without introducing new limitations and compromises, such as low signal-to-background ratios and reduced axial resolution. As a result, single-point-scanning optical-sectioning techniques remain the most widely used instruments for volumetric imaging of living biological systems to date.
In order to develop wide-field optical-sectioning techniques whose optical performance is equivalent to that of single-point-scanning ones, this thesis first introduces the mechanisms and limitations of existing wide-field optical-sectioning techniques, and then presents our innovations aimed at overcoming these limitations. We demonstrate, theoretically and experimentally, that our proposed wide-field optical-sectioning techniques can achieve diffraction-limited optical sectioning, low out-of-focus excitation, and high-frame-rate imaging in living biological systems. In addition to these imaging capabilities, our proposed techniques can be instrumentally simple and economical, and are straightforward to implement on conventional wide-field microscopes. Together, these advantages show the potential of our innovations to be widely used for high-speed, volumetric fluorescence imaging of living biological systems.
Abstract:
While some of the deepest results in nature are those that give explicit bounds between important physical quantities, some of the most intriguing and celebrated of such bounds come from fields where there is still a great deal of disagreement and confusion regarding even the most fundamental aspects of the theories. For example, in quantum mechanics, there is still no complete consensus as to whether the limitations associated with Heisenberg's Uncertainty Principle derive from an inherent randomness in physics, or rather from limitations in the measurement process itself, resulting from phenomena like back action. Likewise, the second law of thermodynamics makes a statement regarding the increase in entropy of closed systems, yet the theory itself has neither a universally accepted definition of equilibrium, nor an adequate explanation of how a system with underlying microscopically Hamiltonian dynamics (reversible) settles into a fixed distribution.
Motivated by these physical theories, and perhaps their inconsistencies, in this thesis we use dynamical systems theory to investigate how the very simplest of systems, even with no physical constraints, are characterized by bounds that give limits to the ability to make measurements on them. Using an existing interpretation, we start by examining how dissipative systems can be viewed as high-dimensional lossless systems, and how taking this view necessarily implies the existence of a noise process that results from the uncertainty in the initial system state. This fluctuation-dissipation result plays a central role in a measurement model that we examine, in particular describing how noise is inevitably injected into a system during a measurement, noise that can be viewed as originating either from the randomness of the many degrees of freedom of the measurement device, or of the environment. This noise constitutes one component of measurement back action, and ultimately imposes limits on measurement uncertainty. Depending on the assumptions we make about active devices and their limitations, this back action can be offset to varying degrees via control. It turns out that using active devices to reduce measurement back action leads to estimation problems that have non-zero uncertainty lower bounds, the most interesting of which arise when the observed system is lossless. One such lower bound, a main contribution of this work, can be viewed as a classical version of a Heisenberg uncertainty relation between the system's position and momentum. Finally, we also revisit the murky question of how macroscopic dissipation appears from lossless dynamics, and propose alternative approaches for framing the question using existing systematic methods of model reduction.
Abstract:
Mean velocity profiles were measured in the 5” x 60” wind channel of the turbulence laboratory at GALCIT, using a hot-wire anemometer. The repeatability of the results was established, and the accuracy of the instrumentation estimated. The scatter of the experimental results is little, if at all, beyond this limit, although some effects might be expected to arise from variations in atmospheric humidity, no account of this factor having been taken in the present work. Slight unsteadiness in flow conditions will also be responsible for some scatter.
Irregular behavior of a hot wire in close proximity to a solid boundary at low speeds was observed, as has already been found by others.
It was verified that Kármán’s logarithmic law holds reasonably well over the main part of a fully developed turbulent flow, the equation u/u_τ = 6.0 + 6.25 log_10(y u_τ/ν) being obtained; as has previously been the case, the experimental points do not quite form one straight line in the region where viscosity effects are small. The values of the constants of this law giving the best overall agreement were determined and compared with those obtained by others.
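As an illustrative evaluation of the fitted law (the numerical example is ours, not the report's), at a wall distance where y u_τ/ν = 1000 the profile gives:

```latex
\frac{u}{u_\tau}
= 6.0 + 6.25\,\log_{10}\!\left(\frac{y\,u_\tau}{\nu}\right)
= 6.0 + 6.25 \times 3
= 24.75,
```

where u_τ is the friction velocity and ν the kinematic viscosity of the air.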
The range of Reynolds numbers used (based on half-width of channel) was from 20,000 to 60,000.
Abstract:
This thesis has two basic themes: the investigation of new experiments which can be used to test relativistic gravity, and the investigation of new technologies and new experimental techniques which can be applied to make gravitational wave astronomy a reality.
Advancing technology will soon make possible a new class of gravitation experiments: pure laboratory experiments with laboratory sources of non-Newtonian gravity and laboratory detectors. The key advance in technology is the development of resonant sensing systems with very low levels of dissipation. Chapter 1 considers three such systems (torque balances, dielectric monocrystals, and superconducting microwave resonators), and it proposes eight laboratory experiments which use these systems as detectors. For each experiment it describes the dominant sources of noise and the technology required.
The coupled electro-mechanical system consisting of a microwave cavity and its walls can serve as a gravitational radiation detector. A gravitational wave interacts with the walls, and the resulting motion induces transitions from a highly excited cavity mode to a nearly unexcited mode. Chapter 2 describes briefly a formalism for analyzing such a detector, and it proposes a particular design.
The monitoring of a quantum mechanical harmonic oscillator on which a classical force acts is important in a variety of high-precision experiments, such as the attempt to detect gravitational radiation. Chapter 3 reviews the standard techniques for monitoring the oscillator; and it introduces a new technique which, in principle, can determine the details of the force with arbitrary accuracy, despite the quantum properties of the oscillator.
The standard method for monitoring the oscillator is the "amplitude-and-phase" method (position or momentum transducer with output fed through a linear amplifier). The accuracy obtainable by this method is limited by the uncertainty principle. To do better requires a measurement of the type which Braginsky has called "quantum nondemolition." A well-known quantum nondemolition technique is "quantum counting," which can detect an arbitrarily weak force, but which cannot provide good accuracy in determining its precise time-dependence. Chapter 3 considers extensively a new type of quantum nondemolition measurement: a "back-action-evading" measurement of the real part X1 (or the imaginary part X2) of the oscillator's complex amplitude. In principle X1 can be measured arbitrarily quickly and arbitrarily accurately, and a sequence of such measurements can lead to an arbitrarily accurate monitoring of the classical force.
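For context, the complex-amplitude quadratures X1 and X2 referred to here are conventionally defined, for a free oscillator of mass m and angular frequency ω, as (this is the standard definition from the QND literature, supplied here for the reader rather than quoted from the thesis):

```latex
X_1 = x\cos\omega t - \frac{p}{m\omega}\sin\omega t, \qquad
X_2 = x\sin\omega t + \frac{p}{m\omega}\cos\omega t, \qquad
[X_1, X_2] = \frac{i\hbar}{m\omega}.
```

Both quadratures are constants of the free motion, so repeated measurements of X1 do not perturb its own later values; the unavoidable measurement back action is diverted entirely into the conjugate quadrature X2, whose uncertainty grows instead.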
Chapter 3 describes explicit gedanken experiments which demonstrate that X1 can be measured arbitrarily quickly and arbitrarily accurately, it considers approximate back-action-evading measurements, and it develops a theory of quantum nondemolition measurement for arbitrary quantum mechanical systems.
In Rosen's "bimetric" theory of gravity the (local) speed of gravitational radiation vg is determined by the combined effects of cosmological boundary values and nearby concentrations of matter. It is possible for vg to be less than the speed of light. Chapter 4 shows that emission of gravitational radiation prevents particles of nonzero rest mass from exceeding the speed of gravitational radiation. Observations of relativistic particles place limits on vg and the cosmological boundary values today, and observations of synchrotron radiation from compact radio sources place limits on the cosmological boundary values in the past.