7 results for Virtual Library

in CaltechTHESIS


Relevance: 30.00%

Abstract:

Part I

Present experimental data on nucleon-antinucleon scattering allow a study of the possibility of a phase transition in a nucleon-antinucleon gas at high temperature. Estimates can be made of the general behavior of the elastic phase shifts without resorting to theoretical derivation. A phase transition which separates nucleons from antinucleons is found at about 280 MeV in the approximation of the second virial coefficient to the free energy of the gas.
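
The second-virial-coefficient estimate described above can be sketched numerically. The following toy calculation uses the Beth-Uhlenbeck relation between elastic phase shifts and the second virial coefficient, with a made-up Breit-Wigner phase shift standing in for the empirical nucleon-antinucleon phase shifts the thesis works from:

```python
import numpy as np

# Beth-Uhlenbeck contribution of one elastic channel to the second virial
# coefficient:  Delta b2(T) = (1/pi) * int dk (d delta/dk) exp(-E(k)/T),
# with T in MeV.  The Breit-Wigner phase shift below is a toy stand-in.
M = 939.0                      # nucleon mass, MeV
E_R, GAMMA = 100.0, 50.0       # hypothetical resonance energy and width, MeV

def delta(k):
    E = k ** 2 / M             # non-relativistic relative kinetic energy
    return np.arctan2(GAMMA / 2.0, E_R - E)   # rises through pi/2 at E_R

def delta_b2(T, kmax=2000.0, n=20001):
    k = np.linspace(1e-3, kmax, n)
    ddelta = np.gradient(delta(k), k)
    return np.sum(ddelta * np.exp(-(k ** 2 / M) / T)) * (k[1] - k[0]) / np.pi

b2 = delta_b2(280.0)           # evaluated near the quoted transition temperature
```

An attractive resonance gives a rising phase shift and hence a positive correction to the virial coefficient; repeating this over the measured channels is the kind of input the free-energy estimate needs.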

Part II

The parton model is used to derive scaling laws for the hadrons observed in deep inelastic electron-nucleon scattering which lie in the fragmentation region of the virtual photon. Scaling relations are obtained in the Bjorken and Regge regions. It is proposed that the distribution functions become independent of both q² and ν where the Bjorken and Regge regions overlap. The quark density functions are discussed in the limit x→1 for the nucleon octet and the pseudoscalar mesons. Under certain plausible assumptions it is found that only one or two of the six types of quarks and antiquarks have an appreciable density function in the limit x→1. This has implications for the quark fragmentation functions near the large momentum boundary of their fragmentation region. These results are used to propose a method of measuring the proton and neutron quark density functions for all x by making measurements on inclusively produced hadrons in electroproduction only. Implications are also discussed for the hadrons produced in electron-positron annihilation.

Relevance: 20.00%

Abstract:

Advances in optical techniques have enabled many breakthroughs in biology and medicine. However, light scattering by biological tissues remains a great obstacle, restricting the use of optical methods to thin ex vivo sections or superficial layers in vivo. In this thesis, we present two related methods that overcome the optical depth limit -- digital time reversal of ultrasound-encoded light (digital TRUE) and time reversal of variance-encoded light (TROVE). These two techniques share the same principle of using acousto-optic beacons for time reversal optical focusing within highly scattering media, like biological tissues. Ultrasound, unlike light, is not significantly scattered in soft biological tissues, allowing for ultrasound focusing. In addition, a fraction of the scattered optical wavefront that passes through the ultrasound focus gets frequency-shifted via the acousto-optic effect, essentially creating a virtual source of frequency-shifted light within the tissue. The scattered ultrasound-tagged wavefront can be selectively measured outside the tissue and time-reversed to converge at the location of the ultrasound focus, enabling optical focusing within deep tissues. In digital TRUE, we time reverse ultrasound-tagged light with an optoelectronic time reversal device (the digital optical phase conjugate mirror, DOPC). The use of the DOPC enables high optical gain, allowing for high-intensity optical focusing and focal fluorescence imaging in thick tissues at a lateral resolution of 36 µm by 52 µm. The resolution of the TRUE approach is fundamentally limited to that of the wavelength of ultrasound. The ultrasound focus (~ tens of microns wide) usually contains hundreds to thousands of optical modes, such that the scattered wavefront measured is a linear combination of the contributions of all these optical modes.
In TROVE, we make use of our ability to digitally record, analyze and manipulate the scattered wavefront to demix the contributions of these spatial modes using variance encoding. In essence, we encode each spatial mode inside the scattering sample with a unique variance, allowing us to computationally derive the time reversal wavefront that corresponds to a single optical mode. In doing so, we uncouple the system resolution from the size of the ultrasound focus, demonstrating optical focusing and imaging between highly diffusing samples at an unprecedented, speckle-scale lateral resolution of ~ 5 µm. Our methods open up the possibility of fully exploiting the prowess and versatility of biomedical optics in deep tissues.
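
The time-reversal focusing principle shared by TRUE and TROVE can be illustrated with a toy transmission-matrix model (an idealization; the real experiments measure and replay optical wavefronts with the DOPC hardware):

```python
import numpy as np

# Toy model of time-reversal focusing through a scatterer: the medium is a
# random complex transmission matrix T; by reciprocity, the phase-conjugated
# field propagates back with T's transpose and refocuses on the source mode.
rng = np.random.default_rng(0)
N = 512                                  # number of optical modes
T = (rng.standard_normal((N, N))
     + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)

src = 137                                # index of the "virtual source" mode
out = T[:, src]                          # scattered field from a point source
back = T.T @ np.conj(out)                # phase-conjugate and back-propagate

intensity = np.abs(back) ** 2
enhancement = intensity[src] / np.mean(np.delete(intensity, src))
```

The peak-to-background enhancement scales with the number of controlled modes, which is why a high-gain digital conjugate mirror pays off.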

Relevance: 20.00%

Abstract:

The work presented in this thesis revolves around erasure correction coding, as applied to distributed data storage and real-time streaming communications.

First, we examine the problem of allocating a given storage budget over a set of nodes for maximum reliability. The objective is to find an allocation of the budget that maximizes the probability of successful recovery by a data collector accessing a random subset of the nodes. This optimization problem is challenging in general because of its combinatorial nature, despite its simple formulation. We study several variations of the problem, assuming different allocation models and access models, and determine the optimal allocation and the optimal symmetric allocation (in which all nonempty nodes store the same amount of data) for a variety of cases. Although the optimal allocation can have nonintuitive structure and can be difficult to find in general, our results suggest that, as a simple heuristic, reliable storage can be achieved by spreading the budget maximally over all nodes when the budget is large, and spreading it minimally over a few nodes when it is small. Coding would therefore be beneficial in the former case, while uncoded replication would suffice in the latter case.
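
A minimal sketch of the symmetric-allocation question, under assumed conventions (unit file size, a collector reading a uniformly random r-subset of n nodes; the names are illustrative, not the thesis's notation):

```python
from math import ceil, comb

def recovery_prob(n, r, budget, m):
    """P(recovery) for a unit-size file coded over m nodes, each storing
    budget/m, when a collector reads a uniformly random r-subset: recovery
    succeeds iff at least ceil(m/budget) nonempty nodes are accessed."""
    need = ceil(m / budget)
    return sum(comb(m, x) * comb(n - m, r - x)
               for x in range(need, min(m, r) + 1)) / comb(n, r)

def best_symmetric(n, r, budget):
    """Number of nonempty nodes maximizing the recovery probability."""
    return max(range(1, n + 1), key=lambda m: recovery_prob(n, r, budget, m))
```

With n = 10 and r = 4, a budget of 4 spread over all 10 nodes recovers with probability 1, while a budget of 1 is best kept on a single node (probability 0.4), matching the spread-maximally/spread-minimally heuristic above.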

Second, we study how distributed storage allocations affect the recovery delay in a mobile setting. Specifically, two recovery delay optimization problems are considered for a network of mobile storage nodes: the maximization of the probability of successful recovery by a given deadline, and the minimization of the expected recovery delay. We show that the first problem is closely related to the earlier allocation problem, and solve the second problem completely for the case of symmetric allocations. It turns out that the optimal allocations for the two problems can be quite different. In a simulation study, we evaluated the performance of a simple data dissemination and storage protocol for mobile delay-tolerant networks, and observed that the choice of allocation can have a significant impact on the recovery delay under a variety of scenarios.

Third, we consider a real-time streaming system where messages created at regular time intervals at a source are encoded for transmission to a receiver over a packet erasure link; the receiver must subsequently decode each message within a given delay from its creation time. For erasure models containing a limited number of erasures per coding window, per sliding window, and containing erasure bursts whose maximum length is sufficiently short or long, we show that a time-invariant intrasession code asymptotically achieves the maximum message size among all codes that allow decoding under all admissible erasure patterns. For the bursty erasure model, we also show that diagonally interleaved codes derived from specific systematic block codes are asymptotically optimal over all codes in certain cases. We also study an i.i.d. erasure model in which each transmitted packet is erased independently with the same probability; the objective is to maximize the decoding probability for a given message size. We derive an upper bound on the decoding probability for any time-invariant code, and show that the gap between this bound and the performance of a family of time-invariant intrasession codes is small when the message size and packet erasure probability are small. In a simulation study, these codes performed well against a family of random time-invariant convolutional codes under a number of scenarios.
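
The i.i.d. erasure model lends itself to a small illustrative calculation (a toy code, not the thesis's intrasession constructions): a k-symbol message spread uniformly over an n-packet window decodes iff at least k of the n packets arrive.

```python
from math import comb

# Decoding probability under i.i.d. erasures with per-packet probability p:
# at least k of n packets must survive for the k-symbol message to decode.
def decode_prob(n, k, p):
    return sum(comb(n, a) * (1 - p) ** a * p ** (n - a)
               for a in range(k, n + 1))
```

Sweeping k against p with such a formula exposes the message-size/reliability trade-off that the thesis's upper bound and code family make precise.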

Finally, we consider the joint problems of routing and caching for named data networking. We propose a backpressure-based policy that employs virtual interest packets to make routing and caching decisions. In a packet-level simulation, the proposed policy outperformed a basic protocol that combines shortest-path routing with least-recently-used (LRU) cache replacement.
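
A minimal sketch of a backpressure forwarding rule driven by VIP counts (illustrative only; the thesis's policy also covers caching decisions and link scheduling, which are omitted here):

```python
# Each node keeps a virtual interest packet (VIP) count per data object.
# For each link (a, b), forward the object with the largest positive VIP
# backlog differential vip[a][obj] - vip[b][obj]; stay idle if none is positive.
def backpressure_choice(vip, links):
    decisions = {}
    for a, b in links:
        obj, diff = max(((o, vip[a][o] - vip[b][o]) for o in vip[a]),
                        key=lambda t: t[1])
        decisions[(a, b)] = obj if diff > 0 else None
    return decisions

vip = {"n1": {"video": 7, "news": 2}, "n2": {"video": 3, "news": 5}}
choice = backpressure_choice(vip, [("n1", "n2")])
```

Here "n1" forwards "video" (differential 4) rather than "news" (differential -3), which is the backlog-draining behavior backpressure policies exploit.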

Relevance: 20.00%

Abstract:

Lipid bilayer membranes are models for cell membranes -- the structure that helps regulate cell function. Cell membranes are heterogeneous, and the coupling between composition and shape gives rise to complex behaviors that are important to regulation. This thesis seeks to systematically build and analyze complete models to understand the behavior of multi-component membranes.

We propose a model and use it to derive the equilibrium and stability conditions for a general class of closed multi-component biological membranes. Our analysis shows that the critical modes of these membranes have high frequencies, unlike single-component vesicles, and their stability depends on system size, unlike in systems undergoing spinodal decomposition in flat space. An important implication is that small perturbations may nucleate localized but very large deformations. We compare these results with experimental observations.

We also study open membranes to gain insight into long tubular membranes that arise, for example, in nerve cells. We derive a complete system of equations for open membranes by using the principle of virtual work. Our linear stability analysis predicts that tubular membranes tend to have coiling shapes if the tension is small, cylindrical shapes if the tension is moderate, and beading shapes if the tension is large. This is consistent with experimental observations on nerve fibers reported in the literature. Further, we provide numerical solutions to the fully nonlinear equilibrium equations for some problems, and show that the observed mode shapes are consistent with those suggested by linear stability. Our work also proves that beading of nerve fibers can appear purely as a mechanical response of the membrane.
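
For context, equilibrium and stability analyses of this kind typically start from a bending-energy functional; a Helfrich-type form with a composition field φ (an assumed generic form -- the abstract does not state the thesis's exact functional) is:

```latex
% Helfrich-type energy with a composition field \phi (assumed generic form)
E = \int_{\mathcal{A}} \left[ \frac{\kappa(\phi)}{2}\,\bigl(2H - c_0(\phi)\bigr)^2
    + \frac{\epsilon}{2}\,\lvert \nabla_s \phi \rvert^2 + f(\phi) \right] \mathrm{d}A
    \;+\; \sigma\,\mathcal{A} \;-\; p\,V
```

where H is the mean curvature, c_0(φ) a composition-dependent spontaneous curvature, σ the tension, and p the pressure. Setting the first variation to zero -- the principle of virtual work invoked above -- yields the shape equation and, for open membranes, the natural boundary conditions.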

Relevance: 20.00%

Abstract:

The 0.2% experimental accuracy of the 1968 Beers and Hughes measurement of the annihilation lifetime of ortho-positronium motivates the attempt to compute the first order quantum electrodynamic corrections to this lifetime. The theoretical problems arising in this computation are here studied in detail up to the point of preparing the necessary computer programs and using them to carry out some of the less demanding steps -- but the computation has not yet been completed. Analytic evaluation of the contributing Feynman diagrams is superior to numerical evaluation, and for this process can be carried out with the aid of the Reduce algebra manipulation computer program.

The relation of the positronium decay rate to the electron-positron annihilation-in-flight amplitude is derived in detail, and it is shown that at threshold annihilation-in-flight, Coulomb divergences appear while infrared divergences vanish. The threshold Coulomb divergences in the amplitude cancel against like divergences in the modulating continuum wave function.

Using the lowest order diagrams of electron-positron annihilation into three photons as a test case, various pitfalls of computer algebraic manipulation are discussed along with ways of avoiding them. The computer manipulation of artificial polynomial expressions is preferable to the direct treatment of rational expressions, even though redundant variables may have to be introduced.
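
The polynomial-versus-rational point can be illustrated with a modern computer algebra system (SymPy here standing in for the Reduce system used in the thesis; the expression is made up):

```python
import sympy as sp

# The propagator denominator 1/(p2 - m2) is replaced by the redundant
# polynomial variable D, so all intermediate algebra stays polynomial;
# the rational form is restored only at the end by substituting D back.
p2, m2, D = sp.symbols('p2 m2 D')

poly = sp.expand((1 + m2 * D) ** 2)       # polynomial manipulation only
rational = poly.subs(D, 1 / (p2 - m2))    # substitute the denominator back
direct = (1 + m2 / (p2 - m2)) ** 2        # same quantity handled rationally
```

Keeping intermediates polynomial avoids the repeated greatest-common-divisor computations that make direct rational arithmetic expensive, at the cost of carrying the redundant variable D.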

Special properties of the contributing Feynman diagrams are discussed, including the need to restore gauge invariance to the sum of the virtual photon-photon scattering box diagrams by means of a finite subtraction.

A systematic approach to the Feynman-Brown method of decomposition of single-loop diagram integrals with spin-related tensor numerators is developed in detail. This approach allows the Feynman-Brown method to be straightforwardly programmed in the Reduce algebra manipulation language.

The fundamental integrals needed in the wake of the application of the Feynman-Brown decomposition are exhibited, and the methods which were used to evaluate them -- primarily dispersion techniques -- are briefly discussed.

Finally, it is pointed out that while the techniques discussed have permitted the computation of a fair number of the simpler integrals and diagrams contributing to the first order correction of the ortho-positronium annihilation rate, further progress with the more complicated diagrams and with the evaluation of traces is heavily contingent on obtaining access to adequate computer time and core capacity.

Relevance: 20.00%

Abstract:

While concentrator photovoltaic cells have shown significant improvements in efficiency in the past ten years, once these cells are integrated into concentrating optics, connected to a power conditioning system, and deployed in the field, the overall module efficiency drops to only 34 to 36%. This efficiency is impressive compared to conventional flat-plate modules, but it is far short of the theoretical limits for solar energy conversion. Ultra-high efficiency of 50% or greater cannot be reached by refinement and iteration of current design approaches.

This thesis takes a systems approach to designing a photovoltaic system capable of 50% efficient performance using conventional diode-based solar cells. The effort began with an exploration of the limiting efficiency of spectrum-splitting ensembles with 2 to 20 sub-cells in different electrical configurations. Incorporating realistic non-ideal performance with the computationally simple detailed balance approach resulted in practical limits that are useful to identify specific cell performance requirements. This effort quantified the relative benefit of additional cells and concentration for system efficiency, which will help in designing practical optical systems.
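
The flavor of such a limiting-efficiency calculation can be sketched as follows. This toy version computes only the "ultimate efficiency" of an ideal spectrum-splitting stack under a 6000 K blackbody, ignoring the radiative-recombination terms a full detailed-balance treatment includes; all numbers are illustrative:

```python
import numpy as np

# Each spectral slice goes to a cell whose band gap equals the slice's lower
# edge; every absorbed photon is assumed to deliver exactly the gap energy.
kT = 0.517                                # eV, kT of a 6000 K blackbody
E = np.linspace(0.01, 10.0, 20000)        # photon energy grid, eV
flux = E ** 2 / np.expm1(E / kT)          # blackbody photon flux (arb. units)
dE = E[1] - E[0]
P_in = np.sum(E * flux) * dE              # total incident power

def ultimate_eff(gaps):
    edges = list(gaps) + [np.inf]
    out = 0.0
    for g, hi in zip(gaps, edges[1:]):
        sel = (E >= g) & (E < hi)
        out += g * np.sum(flux[sel]) * dE  # each photon yields energy g
    return out / P_in

eff1 = ultimate_eff([1.1])                 # single junction
eff4 = ultimate_eff([0.7, 1.1, 1.6, 2.2])  # four-way spectral split
```

Even this crude model shows the monotone gain from additional sub-cells that the thesis quantifies with realistic non-idealities.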

Efforts to improve the quality of the solar cells themselves focused on the development of tunable lattice constant epitaxial templates. Initially intended to enable lattice-matched multijunction solar cells, these templates would enable increased flexibility in band gap selection for spectrum-splitting ensembles and enhanced radiative quality relative to metamorphic growth. The III-V material family is commonly used for multijunction solar cells both for its high radiative quality and for the ease of integrating multiple band gaps into one monolithic growth. The band gap flexibility is limited by the lattice constant of available growth templates. A virtual substrate overcomes this limit: it consists of a thin III-V film with the desired lattice constant. The film is grown strained on an available wafer substrate, but the thickness is kept below the dislocation nucleation threshold. By removing the film from the growth substrate, allowing the strain to relax elastically, and bonding it to a supportive handle, a template with the desired lattice constant is formed. Experimental efforts towards this structure and initial proof of concept are presented.
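
The dislocation nucleation threshold mentioned above is commonly estimated with the Matthews-Blakeslee critical-thickness condition; the sketch below solves it by fixed-point iteration with illustrative parameters (not values from the thesis):

```python
from math import cos, log, pi, radians

# Matthews-Blakeslee estimate, h = A * (ln(h/b) + 1), for a 60-degree misfit
# dislocation.  f: lattice mismatch, b: Burgers vector magnitude (nm),
# nu: Poisson ratio, alpha/lam: dislocation geometry angles (degrees).
def critical_thickness(f, b=0.4, nu=0.3, alpha=60.0, lam=60.0):
    A = b * (1 - nu * cos(radians(alpha)) ** 2) / (
        8 * pi * f * (1 + nu) * cos(radians(lam)))
    h = 10 * b                            # initial guess
    for _ in range(200):                  # fixed-point iteration
        h = A * (log(h / b) + 1)
    return h

h1 = critical_thickness(0.01)             # ~1% mismatch, thickness in nm
```

Larger mismatch shrinks the safely strained thickness, which is why elastically relaxed virtual substrates broaden the accessible lattice-constant range.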

Cells with high radiative quality present the opportunity to recover a large amount of their radiative losses if they are incorporated in an ensemble that couples emission from one cell to another. This effect is well known, but has been explored previously in the context of sub-cells that independently operate at their maximum power point. This analysis explicitly accounts for the system interaction and identifies ways to enhance overall performance by operating some cells in an ensemble at voltages that reduce the power converted in the individual cell. Series-connected multijunctions, which by their nature facilitate strong optical coupling between sub-cells, are reoptimized with substantial performance benefit.

Photovoltaic efficiency is usually measured relative to a standard incident spectrum to allow comparison between systems. Deployed in the field, systems may differ in energy production due to sensitivity to changes in the spectrum. The series connection constraint in particular causes system efficiency to decrease as the incident spectrum deviates from the standard spectral composition. This thesis performs a case study comparing the performance of systems over a year at a particular location to identify the energy production penalty caused by series connection relative to independent electrical connection.

Relevance: 20.00%

Abstract:

STEEL, the Caltech-created nonlinear large-displacement analysis software, is currently used by a large number of researchers at Caltech. However, due to its complexity and lack of visualization tools (such as pre- and post-processing capabilities), rapid creation and analysis of models using this software was difficult. SteelConverter was created as a means to facilitate model creation through the use of the industry-standard finite element solver ETABS. This software allows users to create models in ETABS and intelligently convert model information such as geometry, loading, releases, fixity, etc., into a format that STEEL understands. Models that would take several days to create and verify now take several hours or less. Both the productivity of the researcher and the level of confidence in the model being analyzed are greatly increased.

It has always been a major goal of Caltech to spread the knowledge created here to other universities. However, due to the complexity of STEEL, it was difficult for researchers or engineers from other universities to conduct analyses. While SteelConverter did help researchers at Caltech improve their research, sending SteelConverter and its documentation to other universities was less than ideal. Issues of version control, individual computer requirements, and the difficulty of releasing updates made a more centralized solution preferable. This is where the idea for Caltech VirtualShaker was born. Through the creation of a centralized website where users could log in, submit, analyze, and process models in the cloud, all of the major concerns associated with the utilization of SteelConverter were eliminated. Caltech VirtualShaker allows users to create profiles where defaults associated with their most commonly run models are saved, and allows them to submit multiple jobs to an online virtual server to be analyzed and post-processed. The creation of this website not only allowed for more rapid distribution of this tool, but also created a means for engineers and researchers with no access to powerful computer clusters to run computationally intensive analyses without the excessive cost of building and maintaining a computer cluster.

In order to increase confidence in the use of STEEL as an analysis system, as well as to verify the conversion tools, a series of comparisons was done between STEEL and ETABS. Six models of increasing complexity, ranging from a cantilever column to a twenty-story moment frame, were analyzed to determine the ability of STEEL to accurately calculate basic model properties such as elastic stiffness and damping through a free vibration analysis, as well as more complex structural properties such as overall structural capacity through a pushover analysis. These analyses showed very strong agreement between the two programs on every aspect of each analysis. However, these analyses also showed the ability of the STEEL analysis algorithm to converge at significantly larger drifts than ETABS when using the more computationally expensive and structurally realistic fiber hinges. Following the ETABS analysis, it was decided to repeat the comparisons in a program more capable of conducting highly nonlinear analysis, Perform. These analyses again showed very strong agreement between the two programs in every aspect of each analysis up to the point of instability. However, due to some limitations in Perform, free vibration analyses for the three-story one-bay chevron brace frame, two-bay chevron brace frame, and twenty-story moment frame could not be conducted. With the current trend towards ultimate capacity analysis, the ability to use fiber-based models allows engineers to gain a better understanding of a building's behavior under these extreme load scenarios.
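
The free-vibration damping identification mentioned above can be sketched with the standard logarithmic-decrement method on a synthetic single-degree-of-freedom response (standing in for a STEEL or ETABS time history):

```python
import numpy as np

# Synthetic damped free vibration: 5% damping ratio, 1 Hz natural frequency.
zeta, wn = 0.05, 2 * np.pi
wd = wn * np.sqrt(1 - zeta ** 2)              # damped natural frequency
t = np.arange(0.0, 10.0, 1e-4)
x = np.exp(-zeta * wn * t) * np.cos(wd * t)   # displacement history

# Successive displacement peaks (interior local maxima of the record).
pk = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
delta = np.mean(np.log(x[pk[:-1]] / x[pk[1:]]))    # logarithmic decrement
zeta_est = delta / np.sqrt(4 * np.pi ** 2 + delta ** 2)
```

Applying the same peak-ratio extraction to the computed histories from each program gives the damping values being compared across solvers.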

Following this, a final study was done on Hall's U20 structure [1], in which the structure was analyzed in all three programs and their results compared. The pushover curves from each program were compared and the differences caused by variations in software implementation explained. From this, conclusions can be drawn on the effectiveness of each analysis tool when attempting to analyze structures through the point of geometric instability. The analyses show that while ETABS was capable of accurately determining the elastic stiffness of the model, following the onset of inelastic behavior the analysis tool failed to converge. However, for the small number of time steps during which the ETABS analysis was converging, its results exactly matched those of STEEL, leading to the conclusion that ETABS is not an appropriate analysis package for analyzing a structure through the point of collapse when using fiber elements throughout the model. The analyses also showed that while Perform was capable of calculating the response of the structure accurately, restrictions in the material model resulted in a pushover curve that did not match that of STEEL exactly, particularly post-collapse. However, such problems could be alleviated by choosing a simpler material model.