23 results for "high-order upwind schemes"

in CaltechTHESIS


Relevance: 100.00%

Abstract:

This thesis presents a new approach for the numerical solution of three-dimensional problems in elastodynamics. The new methodology, which is based on a recently introduced Fourier continuation (FC) algorithm for the solution of partial differential equations on the basis of accurate Fourier expansions of possibly non-periodic functions, enables fast, high-order solutions of the time-dependent elastic wave equation in a nearly dispersionless manner, and it requires CFL constraints that scale only linearly with spatial discretizations. A new FC operator is introduced to treat Neumann and traction boundary conditions, and a block-decomposed (sub-patch) overset strategy is presented for the implementation of general, complex geometries in distributed-memory parallel computing environments. Our treatment of the elastic wave equation, which is formulated as a complex system of variable-coefficient PDEs that includes possibly heterogeneous and spatially varying material constants, represents the first fully realized three-dimensional extension of FC-based solvers to date. Challenges specific to three-dimensional elastodynamics simulations are all addressed, including the treatment of corners and edges in three-dimensional geometries, variable coefficients arising from physical configurations and/or the use of curvilinear coordinate systems, and the treatment of boundary conditions. The broad applicability of our new FC elasticity solver is demonstrated through application to realistic problems concerning seismic wave motion on three-dimensional topographies as well as applications to non-destructive evaluation where, for the first time, we present three-dimensional simulations for comparison to experimental studies of guided-wave scattering by through-thickness holes in thin plates.
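The motivation for Fourier continuation can be illustrated with a few lines of numpy: plain FFT-based spectral differentiation is spectrally accurate for periodic data but is polluted by the Gibbs phenomenon for non-periodic data, which is precisely the defect the FC algorithm repairs by first building an accurate periodic extension. This sketch only demonstrates the problem, not the FC construction itself:

```python
import numpy as np

n = 256
x = np.arange(n) / n                        # uniform grid on [0, 1)
k = 2j * np.pi * np.fft.fftfreq(n, d=1 / n) # spectral wavenumbers

def fft_deriv(f):
    """Differentiate f on [0,1) by a plain Fourier spectral method."""
    return np.real(np.fft.ifft(k * np.fft.fft(f)))

periodic = np.sin(2 * np.pi * x)            # smoothly periodic on [0, 1)
nonper = x.copy()                           # f(x) = x: f(0) != f(1)

err_p = np.max(np.abs(fft_deriv(periodic) - 2 * np.pi * np.cos(2 * np.pi * x)))
err_n = np.max(np.abs(fft_deriv(nonper) - 1.0))
print(err_p < 1e-8, err_n > 1.0)            # True True: spectral vs. Gibbs-polluted
```

The non-periodic case is effectively differentiating a sawtooth, and the resulting oscillations contaminate the whole derivative; FC-based solvers avoid this while retaining FFT speed.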

Relevance: 100.00%

Abstract:

This thesis presents a new class of solvers for the subsonic compressible Navier-Stokes equations in general two- and three-dimensional spatial domains. The proposed methodology incorporates: 1) a novel linear-cost implicit solver based on use of higher-order backward differentiation formulae (BDF) and the alternating direction implicit (ADI) approach; 2) a fast explicit solver; 3) dispersionless spectral spatial discretizations; and 4) a domain decomposition strategy that negotiates the interactions between the implicit and explicit domains. In particular, the implicit methodology is quasi-unconditionally stable (it does not suffer from CFL constraints for adequately resolved flows), and it can deliver orders of time accuracy between two and six in the presence of general boundary conditions. In fact, this thesis presents, for the first time in the literature, high-order time-convergence curves for Navier-Stokes solvers based on the ADI strategy; previous ADI solvers for the Navier-Stokes equations have not demonstrated orders of temporal accuracy higher than one. An extended discussion is presented in this thesis which places the observed quasi-unconditional stability of the methods of orders two through six on a solid theoretical basis. The performance of the proposed solvers is favorable. For example, a two-dimensional rough-surface configuration including boundary-layer effects at a Reynolds number of one million and Mach number 0.85 (with a well-resolved boundary layer, run up to a sufficiently long time that single vortices travel the entire spatial extent of the domain, and with spatial mesh sizes near the wall on the order of one hundred-thousandth of the length of the domain) was successfully tackled in a relatively short, approximately thirty-hour single-core run; for such discretizations an explicit solver would require truly prohibitive computing times.
As demonstrated via a variety of numerical experiments in two and three dimensions, the proposed multi-domain parallel implicit-explicit implementations further exhibit high-order convergence in space and time, useful stability properties, limited dispersion, and high parallel efficiency.
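The linear cost of the ADI approach comes from replacing one large multidimensional implicit solve with a sequence of one-dimensional solves. A minimal Peaceman-Rachford ADI sketch for the 2-D heat equation (a far simpler model than the compressible Navier-Stokes system treated in the thesis, shown only to illustrate the directional splitting and the absence of an explicit-style CFL restriction):

```python
import numpy as np

def adi_heat_step(u, nu, dt, h):
    """One Peaceman-Rachford ADI step for u_t = nu*(u_xx + u_yy) on a
    square interior grid with homogeneous Dirichlet boundaries.  Each
    half step is implicit in a single direction only, so the work is a
    set of one-dimensional solves rather than one large 2-D solve."""
    n = u.shape[0]
    r = nu * dt / (2.0 * h * h)
    # 1-D second-difference operator with Dirichlet boundaries
    T = (np.diag(-2.0 * np.ones(n))
         + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))
    A = np.eye(n) - r * T                      # implicit operator per half step
    # half step 1: implicit in x (axis 0), explicit in y (axis 1)
    u_star = np.linalg.solve(A, u + r * (u @ T))
    # half step 2: implicit in y, explicit in x
    return np.linalg.solve(A, (u_star + r * (T @ u_star)).T).T

# evolve a single decaying Fourier mode with a time step far beyond the
# explicit stability limit (~h^2/(4*nu)) -- no instability appears
n, L, nu = 63, 1.0, 1.0
h = L / (n + 1)
x = np.linspace(h, L - h, n)
u = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))
dt = 50 * h * h / nu
for _ in range(10):
    u = adi_heat_step(u, nu, dt, h)
print(np.max(np.abs(u)) < 1.0)   # True: the mode decays stably
```

Production solvers would use banded tridiagonal solves instead of dense `np.linalg.solve`; the point here is only the split-direction structure.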

Relevance: 100.00%

Abstract:

The centralized paradigm of a single controller and a single plant upon which modern control theory is built is no longer applicable to modern cyber-physical systems of interest, such as the power grid, software-defined networks, or automated highway systems, as these are all large-scale and spatially distributed. Both the scale and the distributed nature of these systems have motivated the decentralization of control schemes into local sub-controllers that measure, exchange, and act on locally available subsets of the globally available system information. This decentralization of control logic leads to different decision makers acting on asymmetric information sets, introduces the need for coordination between them, and, perhaps not surprisingly, makes the resulting optimal control problem much harder to solve. In fact, shortly after such questions were posed, it was realized that seemingly simple decentralized optimal control problems are computationally intractable to solve, with the Witsenhausen counterexample being a famous instance of this phenomenon. Spurred on by this perhaps discouraging result, a concerted 40-year effort to identify tractable classes of distributed optimal control problems culminated in the notion of quadratic invariance, which loosely states that if sub-controllers can exchange information with each other at least as quickly as the effect of their control actions propagates through the plant, then the resulting distributed optimal control problem admits a convex formulation.

The identification of quadratic invariance as an appropriate means of "convexifying" distributed optimal control problems led to a renewed enthusiasm in the controller synthesis community, resulting in a rich set of results over the past decade. The contributions of this thesis can be seen as being a part of this broader family of results, with a particular focus on closing the gap between theory and practice by relaxing or removing assumptions made in the traditional distributed optimal control framework. Our contributions are to the foundational theory of distributed optimal control, and fall under three broad categories, namely controller synthesis, architecture design and system identification.

We begin by providing two novel controller synthesis algorithms. The first is a solution to the distributed H-infinity optimal control problem subject to delay constraints, and provides the only known exact characterization of delay-constrained distributed controllers satisfying an H-infinity norm bound. The second is an explicit dynamic programming solution to a two-player LQR state-feedback problem with varying delays. Accommodating varying delays represents an important first step in combining distributed optimal control theory with the area of Networked Control Systems that considers lossy channels in the feedback loop. Our next set of results is concerned with controller architecture design. When designing controllers for large-scale systems, the architectural aspects of the controller such as the placement of actuators, sensors, and the communication links between them can no longer be taken as given; indeed, the task of designing this architecture is now as important as the design of the control laws themselves. To address this task, we formulate the Regularization for Design (RFD) framework, a unifying, computationally tractable approach, based on the model matching framework and atomic norm regularization, for the simultaneous co-design of a structured optimal controller and the architecture needed to implement it. Our final result is a contribution to distributed system identification. Traditional system identification techniques such as subspace identification are not computationally scalable, and destroy rather than leverage any a priori information about the system's interconnection structure. We argue that in the context of system identification, an essential building block of any scalable algorithm is the ability to estimate local dynamics within a large interconnected system. To that end we propose a promising heuristic for identifying the dynamics of a subsystem that is still connected to a large system.
We exploit the fact that the transfer function of the local dynamics is low-order, but full-rank, while the transfer function of the global dynamics is high-order, but low-rank, to formulate this separation task as a nuclear norm minimization problem. Finally, we conclude with a brief discussion of future research directions, with a particular emphasis on how to incorporate the results of this thesis, and those of optimal control theory in general, into a broader theory of dynamics, control and optimization in layered architectures.
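Nuclear-norm minimization problems of this kind are typically attacked with algorithms whose basic step is singular value thresholding, the proximal operator of the nuclear norm. A minimal numpy sketch of that step (illustrative only, not the thesis' identification algorithm):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: shrink the singular values of M by
    tau and discard those that fall below zero.  This is the proximal
    operator of the nuclear norm, the workhorse step of nuclear-norm
    minimization algorithms."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# toy illustration: a rank-1 matrix buried in small noise is recovered
# as low-rank after one thresholding step
rng = np.random.default_rng(0)
L = np.outer(rng.standard_normal(30), rng.standard_normal(30))  # rank 1
M = L + 0.01 * rng.standard_normal((30, 30))
L_hat = svt(M, 0.5)
print(np.linalg.matrix_rank(L_hat))   # 1: only the low-rank part survives
```

The thresholding level `tau` here is a hypothetical choice sized between the noise singular values and the signal singular value; in practice it is set by the optimization algorithm.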

Relevance: 100.00%

Abstract:

We present measurements of the spatial distribution, kinematics, and physical properties of gas in the circumgalactic medium (CGM) of 2.0 < z < 2.8 UV color-selected galaxies as well as within the 2 < z < 3 intergalactic medium (IGM). These measurements are derived from Voigt profile decomposition of the full Lyα and Lyβ forest in 15 high-resolution, high signal-to-noise ratio QSO spectra, resulting in a catalog of ∼6000 HI absorbers.

Chapter 2 of this thesis focuses on HI surrounding high-z star-forming galaxies drawn from the Keck Baryonic Structure Survey (KBSS). The KBSS is a unique spectroscopic survey of the distant universe designed to explore the details of the connection between galaxies and intergalactic baryons within the same survey volumes. The KBSS combines high-quality background QSO spectroscopy with large densely-sampled galaxy redshift surveys to probe the CGM at scales of ∼50 kpc to a few Mpc. Based on these data, Chapter 2 presents the first quantitative measurements of the distribution, column density, kinematics, and absorber line widths of neutral hydrogen surrounding high-z star-forming galaxies.

Chapter 3 focuses on the thermal properties of the diffuse IGM. This analysis relies on measurements of the ∼6000 absorber line widths to constrain the thermal and turbulent velocities of absorbing "clouds." A positive correlation between the column density of HI and the minimum line width is recovered and implies a temperature-density relation within the low-density IGM for which higher-density regions are hotter, as is predicted by simple theoretical arguments.
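Separating thermal from turbulent broadening rests on the standard decomposition of an absorber's Doppler parameter, b² = b_th² + b_turb², with the thermal part b_th = sqrt(2kT/m). A quick illustrative evaluation for hydrogen, using textbook constants rather than any of the thesis' measured values:

```python
import math

k_B = 1.380649e-23    # Boltzmann constant, J/K
m_H = 1.6735575e-27   # hydrogen atom mass, kg

def b_thermal_kms(T):
    """Thermal Doppler parameter b_th = sqrt(2*k_B*T/m) for hydrogen,
    returned in km/s."""
    return math.sqrt(2.0 * k_B * T / m_H) / 1e3

print(round(b_thermal_kms(1e4), 1))   # ≈ 12.8 km/s at T = 10^4 K
```

Measured line widths broader than this thermal floor are attributed to turbulent (or bulk) motions, which is how minimum line widths constrain the temperature-density relation described above.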

Chapter 4 presents new measurements of the opacity of the IGM and CGM to hydrogen-ionizing photons. The chapter begins with a revised measurement of the HI column density distribution based on this new absorption line catalog that, due to the inclusion of high-order Lyman lines, provides the first statistically robust measurement of the frequency of absorbers with HI column densities 14 ≲ log(N_HI/cm^(-2)) ≲ 17.2. Also presented are the first measurements of the column density distribution of HI within the CGM (50 < d < 300 pkpc) of high-z galaxies. These distributions are used to calculate the total opacity of the IGM and IGM+CGM and to revise previous measurements of the mean free path of hydrogen-ionizing photons within the IGM. This chapter also considers the effect of the surrounding CGM on the transmission of ionizing photons out of the sites of active star-formation and into the IGM.

This thesis concludes with a brief discussion of work in progress focused on understanding the distribution of metals within the CGM of KBSS galaxies. Appendix B discusses my contributions to the MOSFIRE instrumentation project.

Relevance: 100.00%

Abstract:

Shockwave lithotripsy (SWL) is a noninvasive medical procedure wherein shockwaves are repeatedly focused at the location of kidney stones in order to pulverize them. Stone comminution is thought to be the product of two mechanisms: the propagation of stress waves within the stone and cavitation erosion. However, the latter mechanism has also been implicated in vascular injury. In the present work, shock-induced bubble collapse is studied in order to understand the role that it might play in inducing vascular injury. A high-order accurate, shock- and interface-capturing numerical scheme is developed to simulate the three-dimensional collapse of the bubble in both the free-field and inside a vessel phantom. The primary contributions of the numerical study are the characterization of the shock-bubble and shock-bubble-vessel interactions across a large parameter space that includes clinical shockwave lithotripsy pressure amplitudes, problem geometry and tissue viscoelasticity, and the subsequent correlation of these interactions to vascular injury. Specifically, measurements of the vessel wall pressures and displacements, as well as the finite strains in the fluid surrounding the bubble, are utilized with available experiments in tissue to evaluate damage potential. Estimates are made of the smallest injurious bubbles in the microvasculature during both the collapse and jetting phases of the bubble's life cycle. The present results suggest that bubbles larger than 1 μm in diameter could rupture blood vessels under clinical SWL conditions.
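An order-of-magnitude feel for the timescales involved comes from the classical Rayleigh collapse time of an empty bubble, t_c ≈ 0.915 R sqrt(ρ/Δp). The numbers below are illustrative textbook values, not the thesis' simulation parameters:

```python
import math

rho = 1000.0    # water density, kg/m^3
dp = 10e6       # driving overpressure, Pa (order of a lithotripter pulse)
R = 0.5e-6      # radius of a 1 μm diameter bubble, m

# Rayleigh collapse time for an empty spherical bubble
t_c = 0.915 * R * math.sqrt(rho / dp)
print(f"{t_c:.2e} s")   # collapse on the nanosecond scale
```

Such nanosecond-scale collapses are what make the shock- and interface-capturing schemes described above necessary: the flow features are both extremely fast and extremely small compared with the driving pulse.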

Relevance: 100.00%

Abstract:

The buckling of axially compressed cylindrical shells and externally pressurized spherical shells is extremely sensitive to even very small geometric imperfections. In practice this issue is addressed either by using overly conservative knockdown factors while keeping perfect axial or spherical symmetry, or by adding closely and equally spaced stiffeners on the shell surface. The influence of imperfection sensitivity is mitigated, but the shells designed by these approaches are either too heavy or very expensive, and they remain sensitive to imperfections. Despite their drawbacks, these approaches have been used for more than half a century.

This thesis proposes a novel method to design imperfection-insensitive cylindrical shells subject to axial compression. Instead of following the classical paths, focused on axially symmetric or high-order rotationally symmetric cross-sections, the method in this thesis adopts optimal symmetry-breaking wavy cross-sections (wavy shells). The avoidance of imperfection sensitivity is achieved by searching with an evolutionary algorithm for smooth cross-sectional shapes that maximize the minimum among the buckling loads of geometrically perfect and imperfect wavy shells. It is found that the shells designed through this approach can achieve higher critical stresses and knockdown factors than any previously known monocoque cylindrical shells. It is also found that these shells have superior mass efficiency to almost all previously reported stiffened shells.

Experimental studies on a design of composite wavy shell obtained through the proposed method are presented in this thesis. A method of making composite wavy shells and a photogrammetry technique for measuring full-field geometric imperfections have been developed. Numerical predictions based on the measured geometric imperfections match the experiments remarkably well. Experimental results confirm that the wavy shells are not sensitive to imperfections and can carry axial compression with superior mass efficiency.

An efficient computational method for the buckling analysis of corrugated and stiffened cylindrical shells subject to axial compression has been developed in this thesis. This method modifies the traditional Bloch wave method based on the stiffness matrix method of rotationally periodic structures. A highly efficient algorithm has been developed to implement the modified Bloch wave method. This method is applied in buckling analyses of a series of corrugated composite cylindrical shells and a large-scale orthogonally stiffened aluminum cylindrical shell. Numerical examples show that the modified Bloch wave method can achieve very high accuracy and require much less computational time than linear and nonlinear analyses of detailed full finite element models.

This thesis presents parametric studies on a series of externally pressurized pseudo-spherical shells, i.e., polyhedral shells, including the icosahedron, geodesic shells, and triambic icosahedra. Several optimization methods have been developed to further improve the performance of pseudo-spherical shells under external pressure. It has been shown that the buckling pressures of the shell designs obtained from the optimizations are much higher than those of spherical shells and are not sensitive to imperfections.

Relevance: 100.00%

Abstract:

This thesis presents a novel class of algorithms for the solution of scattering and eigenvalue problems on general two-dimensional domains under a variety of boundary conditions, including non-smooth domains and certain "Zaremba" boundary conditions, for which Dirichlet and Neumann conditions are specified on various portions of the domain boundary. The theoretical basis of the methods for Zaremba problems on smooth domains is detailed information, put forth for the first time in this thesis, about the singularity structure of solutions of the Laplace operator under boundary conditions of Zaremba type. The new methods, which are based on use of Green functions and integral equations, incorporate a number of algorithmic innovations, including a fast and robust eigenvalue-search algorithm, use of the Fourier Continuation method for regularization of all smooth-domain Zaremba singularities, and newly derived quadrature rules which give rise to high-order convergence even around singular points for the Zaremba problem. The resulting algorithms enjoy high-order convergence, and they can tackle a variety of elliptic problems under general boundary conditions, including, for example, eigenvalue problems, scattering problems, and, in particular, eigenfunction expansion for time-domain problems in non-separable physical domains with mixed boundary conditions.

Relevance: 100.00%

Abstract:

This thesis presents investigations in four areas of theoretical astrophysics: the production of sterile neutrino dark matter in the early Universe, the evolution of small-scale baryon perturbations during the epoch of cosmological recombination, the effect of primordial magnetic fields on the redshifted 21-cm emission from the pre-reionization era, and the nonlinear stability of tidally deformed neutron stars.

In the first part of the thesis, we study the asymmetry-driven resonant production of 7 keV-scale sterile neutrino dark matter in the primordial Universe at temperatures T ≳ 100 MeV. We report final DM phase space densities that are robust to uncertainties in the nature of the quark-hadron transition. We give transfer functions for cosmological density fluctuations that are useful for N-body simulations. We also provide a public code for the production calculation.

In the second part of the thesis, we study the instability of small-scale baryon pressure sound waves during cosmological recombination. We show that for relevant wavenumbers, inhomogeneous recombination is driven by the transport of ionizing continuum and Lyman-alpha photons. We find a maximum growth factor less than ≈ 1.2 in 10^(7) random realizations of initial conditions. The low growth factors are due to the relatively short duration of the recombination epoch.

In the third part of the thesis, we propose a method of measuring weak magnetic fields, of order 10^(-19) G (or 10^(-21) G if scaled to the present day), with large coherence lengths in the intergalactic medium prior to and during the epoch of cosmic reionization. The method utilizes the Larmor precession of spin-polarized neutral hydrogen in the triplet state of the hyperfine transition. We perform detailed calculations of the microphysics behind this effect, and take into account all the processes that affect the hyperfine transition, including radiative decays, collisions, and optical pumping by Lyman-alpha photons.

In the final part of the thesis, we study the non-linear effects of tidal deformations of neutron stars (NS) in a compact binary. We compute the largest three- and four-mode couplings among the tidal mode and high-order p- and g-modes of similar radial wavenumber. We demonstrate the near-exact cancellation of their effects, and resolve the question of the stability of the tidally deformed NS to leading order. This result is significant for the extraction of binary parameters from gravitational wave observations.

Relevance: 40.00%

Abstract:

With the advent of the laser in the year 1960, the field of optics experienced a renaissance from what was considered to be a dull, solved subject to an active area of development, with applications and discoveries which are yet to be exhausted 55 years later. Light is now nearly ubiquitous not only in cutting-edge research in physics, chemistry, and biology, but also in modern technology and infrastructure. One quality of light, that of the imparted radiation pressure force upon reflection from an object, has attracted intense interest from researchers seeking to precisely monitor and control the motional degrees of freedom of an object using light. These optomechanical interactions have inspired myriad proposals, ranging from quantum memories and transducers in quantum information networks to precision metrology of classical forces. Alongside advances in micro- and nano-fabrication, the burgeoning field of optomechanics has yielded a class of highly engineered systems designed to produce strong interactions between light and motion.

Optomechanical crystals are one such system in which the patterning of periodic holes in thin dielectric films traps both light and sound waves to a micro-scale volume. These devices feature strong radiation pressure coupling between high-quality optical cavity modes and internal nanomechanical resonances. Whether for applications in the quantum or classical domain, the utility of optomechanical crystals hinges on the degree to which light radiating from the device, having interacted with mechanical motion, can be collected and detected in an experimental apparatus consisting of conventional optical components such as lenses and optical fibers. While several efficient methods of optical coupling exist to meet this task, most are unsuitable for the cryogenic or vacuum integration required for many applications. The first portion of this dissertation will detail the development of robust and efficient methods of optically coupling optomechanical resonators to optical fibers, with an emphasis on fabrication processes and optical characterization.

I will then proceed to describe a few experiments enabled by the fiber couplers. The first studies the performance of an optomechanical resonator as a precise sensor for continuous position measurement. The sensitivity of the measurement, limited by the detection efficiency of intracavity photons, is compared to the standard quantum limit imposed by the quantum properties of the laser probe light. The added noise of the measurement is seen to fall within a factor of 3 of the standard quantum limit, representing an order of magnitude improvement over previous experiments utilizing optomechanical crystals, and matching the performance of similar measurements in the microwave domain.

The next experiment uses single photon counting to detect individual phonon emission and absorption events within the nanomechanical oscillator. The scattering of laser light from mechanical motion produces correlated photon-phonon pairs, and detection of the emitted photon corresponds to an effective phonon counting scheme. In the process of scattering, the coherence properties of the mechanical oscillation are mapped onto the reflected light. Intensity interferometry of the reflected light then allows measurement of the temporal coherence of the acoustic field. These correlations are measured for a range of experimental conditions, including the optomechanical amplification of the mechanics to a self-oscillation regime, and comparisons are drawn to a laser system for phonons. Finally, prospects for using phonon counting and intensity interferometry to produce non-classical mechanical states are detailed, following recent proposals in the literature.

Relevance: 40.00%

Abstract:

Flash memory is a leading storage medium with excellent features such as random access and high storage density. However, it also faces significant reliability and endurance challenges. In flash memory, the charge level in the cells can be easily increased, but removing charge requires an expensive erasure operation. In this thesis we study rewriting schemes that enable the data stored in a set of cells to be rewritten by only increasing the charge level in the cells. We consider two types of modulation schemes: a conventional modulation based on the absolute levels of the cells, and a recently proposed scheme based on the relative cell levels, called rank modulation. The contributions of this thesis to the study of rewriting schemes for rank modulation include the following: we

•propose a new method of rewriting in rank modulation, beyond the previously proposed method of “push-to-the-top”;

•study the limits of rewriting with the newly proposed method, and derive a tight upper bound of 1 bit per cell;

•extend the rank-modulation scheme to support rankings with repetitions, in order to improve the storage density;

•derive a tight upper bound of 2 bits per cell for rewriting in rank modulation with repetitions;

•construct an efficient rewriting scheme that asymptotically approaches the upper bound of 2 bits per cell.
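The "push-to-the-top" primitive mentioned above can be sketched in a few lines: the state of a group of cells is the ranking of the cells by charge level, and the only rewrite operation raises one cell's charge above all others, moving it to the top of the ranking. This is an illustrative sketch of the primitive itself, not of the thesis' new rewriting constructions:

```python
def push_to_top(ranking, cell):
    """Return the new ranking after pushing `cell` to the highest rank.
    `ranking` lists cell indices from highest charge to lowest; the
    physical operation only ever *increases* one cell's charge, so no
    erasure is needed."""
    return [cell] + [c for c in ranking if c != cell]

state = [2, 0, 1]              # cell 2 highest, cell 1 lowest
state = push_to_top(state, 1)  # raise cell 1 above all others
print(state)                   # [1, 2, 0]
```

Because data is encoded in the permutation rather than in absolute levels, a sequence of such operations rewrites the stored value without any block erasure.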

The next part of this thesis studies rewriting schemes for a conventional absolute-levels modulation. The considered model is called “write-once memory” (WOM). We focus on WOM schemes that achieve the capacity of the model. In recent years several capacity-achieving WOM schemes were proposed, based on polar codes and randomness extractors. The contributions of this thesis to the study of WOM scheme include the following: we

•propose a new capacity-achieving WOM scheme based on sparse-graph codes, and show its attractive properties for practical implementation;

•improve the design of polar WOM schemes to remove the reliance on shared randomness and include an error-correction capability.

The last part of the thesis studies the local rank-modulation (LRM) scheme, in which a sliding window going over a sequence of real-valued variables induces a sequence of permutations. The LRM scheme is used to simulate a single conventional multi-level flash cell. The simulated cell is realized by a Gray code traversing all the relative-value states where, physically, the transition between two adjacent states in the Gray code is achieved by using a single “push-to-the-top” operation. The main results of the last part of the thesis are two constructions of Gray codes with asymptotically-optimal rate.
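The window-induced permutations of the local rank-modulation scheme can be written down directly: sliding a window of width w over the cell levels yields, at each position, the ranking of the cells inside the window. The windowing details below are an illustrative sketch, not the thesis' exact LRM definition:

```python
def local_ranks(levels, w):
    """For each length-w window over `levels`, return the permutation of
    positions within the window sorted by decreasing level -- the local
    ranking induced by the sliding window."""
    out = []
    for i in range(len(levels) - w + 1):
        window = levels[i:i + w]
        out.append(tuple(sorted(range(w), key=lambda j: -window[j])))
    return out

print(local_ranks([3.0, 1.0, 2.0, 4.0], 3))   # [(0, 2, 1), (2, 1, 0)]
```

A Gray code over these relative-value states then steps between adjacent states using a single push-to-the-top operation, which is what lets the LRM scheme simulate a conventional multi-level cell.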

Relevance: 30.00%

Abstract:

This thesis describes the design, construction, and performance of a high-pressure xenon gas time projection chamber (TPC) for the study of double beta decay in ^(136)Xe. The TPC, when operating at 5 atm, can accommodate 28 moles of 60% enriched ^(136)Xe. The TPC has operated as a detector at Caltech since 1986. It is capable of reconstructing a charged particle trajectory and can easily distinguish between different kinds of charged particles. A gas purification and xenon gas recovery system were developed. The electronics for the 338 channels of readout was developed along with a data acquisition system. Currently, the detector is being prepared at the University of Neuchatel for installation in the low background laboratory situated in the St. Gotthard tunnel, Switzerland. In one year of runtime the detector should be sensitive to a 0ν lifetime of the order of 10^(24) y, which corresponds to a neutrino mass in the range 0.3 to 3.3 eV.

Relevance: 30.00%

Abstract:

Technology scaling has enabled drastic growth in the computational and storage capacity of integrated circuits (ICs). This constant growth drives an increasing demand for high-bandwidth communication between and within ICs. In this dissertation we focus on low-power solutions that address this demand. We divide communication links into three subcategories depending on the communication distance. Each category has a different set of challenges and requirements and is affected by CMOS technology scaling in a different manner. We start with short-range chip-to-chip links for board-level communication. Next we will discuss board-to-board links, which demand a longer communication range. Finally on-chip links with communication ranges of a few millimeters are discussed.

Electrical signaling is a natural choice for chip-to-chip communication due to efficient integration and low cost. I/O data rates have increased to the point where electrical signaling is now limited by the channel bandwidth. In order to achieve multi-Gb/s data rates, complex designs that equalize the channel are necessary. In addition, a high level of parallelism is central to sustaining bandwidth growth. Decision feedback equalization (DFE) is one of the most commonly employed techniques to overcome the limited bandwidth problem of the electrical channels. A linear and low-power summer is the central block of a DFE. Conventional approaches employ current-mode techniques to implement the summer, which require high power consumption. In order to achieve low-power operation we propose performing the summation in the charge domain. This approach enables a low-power and compact realization of the DFE as well as crosstalk cancellation. A prototype receiver was fabricated in 45nm SOI CMOS to validate the functionality of the proposed technique and was tested over channels with different levels of loss and coupling. Measurement results show that the receiver can equalize channels with maximum 21dB loss while consuming about 7.5mW from a 1.2V supply. We also introduce a compact, low-power transmitter employing passive equalization. The efficacy of the proposed technique is demonstrated through implementation of a prototype in 65nm CMOS. The design achieves up to 20Gb/s data rate while consuming less than 10mW.
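The principle behind a DFE, independent of whether the summer is realized in the current or charge domain, is that the receiver subtracts the inter-symbol interference (ISI) contributed by already-decided symbols before slicing the current sample. A minimal one-tap behavioral sketch (illustrative signal-level model, not the circuit described above):

```python
def dfe_receive(samples, postcursor):
    """One-tap decision feedback equalizer over +/-1 symbols: subtract
    the post-cursor ISI of the previous decision, then slice."""
    bits, prev = [], 0
    for s in samples:
        corrected = s - postcursor * prev   # cancel ISI from last symbol
        prev = 1 if corrected > 0 else -1   # slicer decision
        bits.append(1 if prev > 0 else 0)
    return bits

# channel with main cursor 1.0 and post-cursor 0.5 acting on +/-1 symbols
tx = [1, -1, -1, 1, 1, -1]
rx = [tx[i] + 0.5 * (tx[i - 1] if i else 0) for i in range(len(tx))]
print(dfe_receive(rx, 0.5))   # [1, 0, 0, 1, 1, 0]: transmitted bits recovered
```

Without the feedback subtraction, the second sample (-0.5 after the channel) would already sit dangerously close to the slicing threshold; the DFE restores the full eye opening, at the cost of the summer that the charge-domain technique above makes cheap.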

An alternative to electrical signaling is to employ optical signaling for chip-to-chip interconnections, which offers low channel loss and cross-talk while providing high communication bandwidth. In this work we demonstrate the possibility of building compact and low-power optical receivers. A novel RC front-end is proposed that combines dynamic offset modulation and double-sampling techniques to eliminate the need for a short time constant at the input of the receiver. Unlike conventional designs, this receiver does not require a high-gain stage that runs at the data rate, making it suitable for low-power implementations. In addition, it allows time-division multiplexing to support very high data rates. A prototype was implemented in 65nm CMOS and achieved up to 24Gb/s with less than 0.4pJ/b power efficiency per channel. As the proposed design mainly employs digital blocks, it benefits greatly from technology scaling in terms of power and area saving.

As the technology scales, the number of transistors on the chip grows. This necessitates a corresponding increase in the bandwidth of the on-chip wires. In this dissertation, we take a close look at wire scaling and investigate its effect on wire performance metrics. We explore a novel on-chip communication link based on a double-sampling architecture and dynamic offset modulation technique that enables low power consumption and high data rates while achieving high bandwidth density in 28nm CMOS technology. The functionality of the link is demonstrated using minimum-pitch on-chip wires of different lengths. Measurement results show that the link achieves up to 20Gb/s of data rate (12.5Gb/s/μm) with better than 136fJ/b of power efficiency.

Relevância:

30.00%

Publicador:

Resumo:

This dissertation consists of two parts. The first part presents an explicit procedure for applying multi-Regge theory to production processes. As an illustrative example, the case of three-body final states is developed in detail, with respect to both kinematics and multi-Regge dynamics. Next, the experimental consistency of the multi-Regge hypothesis is tested in a specific high-energy reaction; the hypothesis is shown to provide a good qualitative fit to the data. In addition, the results demonstrate a severe suppression of double Pomeranchon exchange, and show that the coupling of two "Reggeons" to an external particle is strongly damped as the particle's mass increases. Finally, using two-body Regge parameters, order-of-magnitude estimates of the multi-Regge cross section for various reactions are given.
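For a three-body final state in the double-Regge limit ($s_1, s_2 \to \infty$ with the momentum transfers $t_1, t_2$ fixed), the production amplitude takes the standard schematic form (notation here is generic; the thesis's conventions for signature factors and vertex functions may differ):

```latex
A(s_1, s_2, t_1, t_2) \;\sim\;
\beta(t_1)\,\xi_{\alpha_1}(t_1)\, s_1^{\alpha_1(t_1)}\;
V(t_1, t_2)\;
\xi_{\alpha_2}(t_2)\, s_2^{\alpha_2(t_2)}\,\beta(t_2),
```

where the $\alpha_i(t)$ are Regge trajectories, the $\xi_\alpha(t)$ are signature factors, the $\beta(t)$ are couplings to the external particles, and $V(t_1,t_2)$ is the internal Reggeon-Reggeon-particle vertex.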

The second part presents a diffraction model for high-energy proton-proton scattering. This model, developed by Chou and Yang, assumes that high-energy elastic scattering results from absorption of the incident wave into the many available inelastic channels, with the absorption proportional to the amount of interpenetrating hadronic matter. The assumption that the hadronic matter distribution is proportional to the charge distribution relates the pp scattering amplitude to the proton form factor. The Chou-Yang model, with the empirical proton form factor as input, is then applied to calculate a high-energy, fixed-momentum-transfer limit for the scattering cross section. This limiting cross section exhibits the same "dip" or "break" structure indicated in present experiments, but falls significantly below the data in magnitude. Finally, possible spin dependence is introduced through a weak spin-orbit-type term, which gives rather good agreement with pp polarization data.
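In modern eikonal notation, the Chou-Yang prescription can be summarized schematically as follows (normalization conventions are assumed, and may differ from those used in the thesis):

```latex
a(s,t) = i\int_0^\infty b\,db\; J_0\!\bigl(b\sqrt{-t}\bigr)
\left[1 - e^{-\Omega(b)}\right],
\qquad
\Omega(b) \;\propto\; \int_0^\infty q\,dq\; J_0(qb)\,\bigl[G(q^2)\bigr]^2,
```

where $\Omega(b)$ is the opacity at impact parameter $b$, built from the overlap of the two hadronic matter distributions, and $G$ is the empirical proton form factor that supplies those distributions.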

Relevância:

30.00%

Publicador:

Resumo:

Optical microscopy has become an indispensable tool for biological research since its invention, owing mostly to its sub-cellular spatial resolution, non-invasiveness, instrumental simplicity, and the intuitive observations it provides. Nonetheless, obtaining reliable, quantitative spatial information from conventional wide-field optical microscopy is not as straightforward as it appears. This is because, in the images acquired by wide-field optical microscopy, information about out-of-focus regions is spatially blurred and mixed with in-focus information. In other words, conventional wide-field optical microscopy transforms the three-dimensional (volumetric) information about an object into a two-dimensional form in each acquired image, and therefore distorts the spatial information about the object. Several fluorescence holography-based methods have demonstrated the ability to obtain three-dimensional information about objects, but these methods generally rely on decomposing stereoscopic visualizations to extract volumetric information and are unable to resolve complex three-dimensional structures such as a multi-layer sphere.

The concept of optical-sectioning techniques, on the other hand, is to detect only two-dimensional information about an object at each acquisition. Specifically, each image obtained by optical-sectioning techniques contains mainly the information about an optically thin layer inside the object, as if only a thin histological section were being observed at a time. Using such a methodology, obtaining undistorted volumetric information about the object simply requires taking images of the object at sequential depths.
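The contrast between the two detection schemes can be illustrated with a toy numerical model (a one-dimensional sketch with made-up blur parameters, not a physical PSF model): wide-field detection sums every plane of the object, each blurred in proportion to its defocus, whereas an ideal optical section records only the in-focus plane.

```python
import numpy as np

def gaussian_kernel(sigma, radius=8):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * max(sigma, 1e-6)**2))
    return k / k.sum()

def widefield_image(volume, focus, blur_per_plane=1.5):
    """Wide-field detection: every plane contributes, blurred by its defocus."""
    img = np.zeros(volume.shape[1])
    for z, plane in enumerate(volume):
        sigma = blur_per_plane * abs(z - focus)   # blur grows with defocus
        img += np.convolve(plane, gaussian_kernel(sigma), mode="same")
    return img

# Toy 1-D object: one bright point in the focal plane (z=2), one far out of focus
volume = np.zeros((5, 64))
volume[2, 20] = 1.0   # in-focus feature
volume[0, 45] = 1.0   # out-of-focus feature

wf = widefield_image(volume, focus=2)
section = volume[2]   # ideal optical section: only the in-focus plane
```

In the wide-field image the out-of-focus point survives as a blurred background superimposed on the in-focus signal, while the ideal section contains no trace of it; repeating the sectioned acquisition at each depth reconstructs the volume without that mixing.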

Among existing methods of obtaining volumetric information, the practicability of optical sectioning has made it the most commonly used and most powerful in biological science. However, when applied to imaging living biological systems, conventional single-point-scanning optical-sectioning techniques often cause some degree of photodamage because of the high focal intensity at the scanning point. To overcome this issue, several wide-field optical-sectioning techniques have been proposed and demonstrated, although not without introducing new limitations and compromises, such as low signal-to-background ratios and reduced axial resolution. As a result, single-point-scanning optical-sectioning techniques remain the most widely used instruments for volumetric imaging of living biological systems to date.

In order to develop wide-field optical-sectioning techniques whose optical performance matches that of single-point-scanning ones, this thesis first introduces the mechanisms and limitations of existing wide-field optical-sectioning techniques, and then presents our innovations aimed at overcoming these limitations. We demonstrate, theoretically and experimentally, that our proposed wide-field optical-sectioning techniques can achieve diffraction-limited optical sectioning, low out-of-focus excitation, and high-frame-rate imaging in living biological systems. In addition to these imaging capabilities, our proposed techniques are instrumentally simple and economical, and are straightforward to implement on conventional wide-field microscopes. Together, these advantages show the potential of our innovations to be widely used for high-speed, volumetric fluorescence imaging of living biological systems.

Relevância:

30.00%

Publicador:

Resumo:

Fundamental studies of the magnetic alignment of highly anisotropic mesostructures can enable the clean-room-free fabrication of flexible, array-based solar and electronic devices in which preferential orientation of nano- or microwire-type objects is desired. In this study, ensembles of 100-micron-long Si microwires with ferromagnetic Ni and Co coatings are oriented vertically in the presence of magnetic fields. The degree of vertical alignment and the threshold field strength depend on geometric factors, such as microwire length and ferromagnetic coating thickness, as well as on interfacial interactions, which are modulated by varying the solvent and substrate surface chemistry. Microwire ensembles with over 97% vertical alignment within 10 degrees of normal, as measured by X-ray diffraction, are achieved over square-centimeter-scale areas and set into flexible polymer films. A force-balance model has been developed as a predictive tool for magnetic alignment, incorporating magnetic torque and empirically derived surface-adhesion parameters. As supported by these calculations, microwires are shown to detach from the surface and align vertically in magnetic fields on the order of 100 gauss. Microwires aligned in this manner are set into a polydimethylsiloxane film, where they retain their vertical alignment after the field has been removed and can subsequently be used as a flexible solar-absorber layer. Finally, these microwire arrays can be protected for use in electrochemical cells by the conformal deposition of a graphene layer.
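The flavor of such a force-balance estimate can be sketched as follows. The wire diameter, coating thickness, and adhesion torque below are invented for illustration (the thesis derives the adhesion parameters empirically); only the wire length, the use of a Ni coating, and the ~100 gauss field scale come from the text.

```python
import math

# Illustrative parameters (wire diameter, coating thickness, and adhesion
# torque are assumed, not taken from the thesis)
M_S_NI = 485e3           # Ni saturation magnetization, A/m (handbook value)
L = 100e-6               # microwire length, m (from the text)
d = 2e-6                 # wire diameter, m (assumed)
t = 300e-9               # Ni coating thickness, m (assumed)

shell_volume = math.pi * d * t * L      # thin-shell approximation for the coating
moment = M_S_NI * shell_volume          # saturated magnetic moment, A*m^2

def magnetic_torque(B, theta):
    """Torque (N*m) tipping the wire toward the field at misalignment theta."""
    return moment * B * math.sin(theta)

# Hypothetical surface-adhesion torque resisting detachment
tau_adhesion = 4e-13     # N*m

# Sweep the field in 1-gauss steps (1 G = 1e-4 T) to find the alignment threshold
B = 0.0
while magnetic_torque(B, math.radians(45)) < tau_adhesion:
    B += 1e-4
threshold_gauss = B / 1e-4
```

With these assumed numbers the crossover lands at a few tens of gauss, i.e. the same order of magnitude as the ~100 gauss fields reported above, which is the kind of consistency check the force-balance model provides.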