17 results for space-to-time conversion

in CaltechTHESIS


Relevance: 100.00%

Abstract:

This thesis presents a new class of solvers for the subsonic compressible Navier-Stokes equations in general two- and three-dimensional spatial domains. The proposed methodology incorporates: 1) A novel linear-cost implicit solver based on use of higher-order backward differentiation formulae (BDF) and the alternating direction implicit approach (ADI); 2) A fast explicit solver; 3) Dispersionless spectral spatial discretizations; and 4) A domain decomposition strategy that negotiates the interactions between the implicit and explicit domains. In particular, the implicit methodology is quasi-unconditionally stable (it does not suffer from CFL constraints for adequately resolved flows), and it can deliver orders of time accuracy between two and six in the presence of general boundary conditions. In fact, this thesis presents, for the first time in the literature, high-order time-convergence curves for Navier-Stokes solvers based on the ADI strategy---previous ADI solvers for the Navier-Stokes equations have not demonstrated orders of temporal accuracy higher than one. An extended discussion is presented in this thesis which places the observed quasi-unconditional stability of the methods of orders two through six on a solid theoretical basis.

The performance of the proposed solvers is favorable. For example, a two-dimensional rough-surface configuration including boundary-layer effects at a Reynolds number of one million and a Mach number of 0.85 (with a well-resolved boundary layer, run up to a sufficiently long time that single vortices travel the entire spatial extent of the domain, and with spatial mesh sizes near the wall on the order of one hundred-thousandth the length of the domain) was successfully tackled in a relatively short (approximately thirty-hour) single-core run; for such discretizations an explicit solver would require truly prohibitive computing times.
As demonstrated via a variety of numerical experiments in two and three dimensions, moreover, the proposed multi-domain parallel implicit-explicit implementations exhibit high-order convergence in space and time, useful stability properties, limited dispersion, and high parallel efficiency.
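
The quoted orders of temporal accuracy are easiest to see on a model problem. The sketch below is an illustrative stand-in (a scalar ODE, not the thesis's Navier-Stokes solver): it applies the BDF2 scheme to u' = -u and checks that the error drops by roughly a factor of four each time the step is halved, i.e., second-order convergence in time.

```python
import math

def bdf2_solve(lam, u0, T, n):
    """Integrate u' = lam*u over [0, T] with n steps of BDF2,
    bootstrapping the first step with backward Euler."""
    h = T / n
    u_prev = u0
    # Backward Euler start-up step: u1 = u0 / (1 - h*lam)
    u_curr = u_prev / (1 - h * lam)
    for _ in range(n - 1):
        # BDF2: (3/2 u_{n+1} - 2 u_n + 1/2 u_{n-1}) / h = lam * u_{n+1}
        u_next = (2 * u_curr - 0.5 * u_prev) / (1.5 - h * lam)
        u_prev, u_curr = u_curr, u_next
    return u_curr

lam, u0, T = -1.0, 1.0, 1.0
exact = u0 * math.exp(lam * T)
errs = [abs(bdf2_solve(lam, u0, T, n) - exact) for n in (100, 200, 400)]
orders = [math.log2(errs[i] / errs[i + 1]) for i in range(2)]
print(orders)  # each observed order should approach 2
```

The same refinement study, applied to the full solver with the higher-order BDF formulae, is what produces the time-convergence curves described above.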

Relevance: 100.00%

Abstract:

An exciting frontier in quantum information science is the integration of otherwise "simple" quantum elements into complex quantum networks. The laboratory realization of even small quantum networks enables the exploration of physical systems that have not heretofore existed in the natural world. Within this context, there is active research to achieve nanoscale quantum optical circuits, for which atoms are trapped near nanoscopic dielectric structures and "wired" together by photons propagating through the circuit elements. Single atoms and atomic ensembles endow otherwise linear optical circuits with quantum functionality and thereby enable the capability of building quantum networks component by component. Toward these goals, we have experimentally investigated three different systems, ranging from conventional to rather exotic: free-space atomic ensembles, optical nanofibers, and photonic crystal waveguides. First, we demonstrate measurement-induced quadripartite entanglement among four quantum memories. Next, following the landmark realization of a nanofiber trap, we demonstrate the implementation of a state-insensitive, compensated nanofiber trap. Finally, we reach more exotic systems based on photonic crystal devices. Beyond conventional topologies of resonators and waveguides, new opportunities emerge from the powerful capabilities of dispersion and modal engineering in photonic crystal waveguides. We have implemented an integrated optical circuit with a photonic crystal waveguide capable of both trapping and interfacing atoms with guided photons, and have observed superradiance, a collective effect mediated by the guided photons. These advances provide an important capability for engineered light-matter interactions, enabling explorations of novel quantum transport and quantum many-body phenomena.

Relevance: 100.00%

Abstract:

In this thesis we uncover a new relation which links thermodynamics and information theory. We consider time as a channel and the detailed state of a physical system as a message. As the system evolves with time, ever-present noise ensures that the "message" is corrupted. Thermodynamic free energy measures the approach of the system toward equilibrium. Information-theoretic mutual information measures the loss of memory of the initial state. We regard the free energy and the mutual information as operators which map probability distributions over state space to real numbers. In the limit of long times, we show how the free energy operator and the mutual information operator asymptotically attain a very simple relationship to one another. This relationship is founded on the common appearance of entropy in the two operators and on an identity between internal energy and conditional entropy. The use of conditional entropy is what distinguishes our approach from previous efforts to relate thermodynamics and information theory.
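
As a toy illustration of this pairing (a hypothetical two-state system, not a construction from the thesis), one can watch both quantities decay as a Markov chain relaxes: the excess free energy over equilibrium equals kT times the relative entropy D(p_t || p_eq), while the mutual information I(X_0; X_t) measures the remaining memory of the initial state.

```python
import math

def kl(p, q):
    """Relative entropy D(p || q) in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Two-state Markov chain with symmetric flip probability a (uniform equilibrium).
a = 0.2
K = [[1 - a, a], [a, 1 - a]]
p_eq = [0.5, 0.5]
p0 = [0.9, 0.1]                     # initial non-equilibrium distribution

def step(p):
    return [sum(p[i] * K[i][j] for i in range(2)) for j in range(2)]

def mutual_info(p0, t):
    """I(X_0 ; X_t) from the joint distribution p0(x) * K^t(x, y)."""
    Kt = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(t):
        Kt = [[sum(Kt[i][k] * K[k][j] for k in range(2)) for j in range(2)]
              for i in range(2)]
    joint = [[p0[i] * Kt[i][j] for j in range(2)] for i in range(2)]
    pt = [joint[0][j] + joint[1][j] for j in range(2)]
    return sum(joint[i][j] * math.log(joint[i][j] / (p0[i] * pt[j]))
               for i in range(2) for j in range(2) if joint[i][j] > 0)

free_energy_excess = []             # (F(t) - F_eq)/kT = D(p_t || p_eq)
info = []                           # I(X_0 ; X_t)
p = p0
for t in range(0, 30, 5):
    free_energy_excess.append(kl(p, p_eq))
    info.append(mutual_info(p0, t))
    for _ in range(5):
        p = step(p)

print(free_energy_excess[0], free_energy_excess[-1])
print(info[0], info[-1])
```

Both sequences start positive and decay toward zero: the free energy tracks the approach to equilibrium, the mutual information tracks the loss of memory of the initial state, and entropy appears in both.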

Relevance: 100.00%

Abstract:

Semiconductor technology scaling has enabled drastic growth in the computational capacity of integrated circuits (ICs). This constant growth drives an increasing demand for high-bandwidth communication between ICs. Electrical channel bandwidth has not been able to keep up with this demand, making I/O link design more challenging. Interconnects that employ optical channels have negligible frequency-dependent loss and provide a potential solution to this I/O bandwidth problem. Apart from the type of channel, efficient high-speed communication also relies on generation and distribution of multi-phase, high-speed, and high-quality clock signals. In the multi-gigahertz frequency range, conventional clocking techniques have encountered several design challenges in terms of power consumption, skew, and jitter. Injection locking is a promising technique to address these design challenges for gigahertz clocking. However, its small locking range has been a major obstacle to its widespread adoption.

In the first part of this dissertation we describe a wideband injection-locking scheme in an LC oscillator. Phase-locked loop (PLL) and injection-locking elements are combined symbiotically to achieve a wide locking range while retaining the simplicity of the latter. This method does not require a phase-frequency detector or a loop filter to achieve phase lock. A mathematical analysis of the system is presented and an expression for the new locking range is derived. A locking range of 13.4 GHz–17.2 GHz (25%) and an average jitter tracking bandwidth of up to 400 MHz are measured in a high-Q LC oscillator. This architecture is used to generate quadrature phases from a single clock without any frequency division. It also provides high-frequency jitter filtering while retaining the low-frequency correlated jitter essential for forwarded-clock receivers.
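
The quoted 25% figure is simply the measured span expressed as a fraction of the center frequency:

```python
# Locking range endpoints as measured above, in Hz.
f_lo, f_hi = 13.4e9, 17.2e9
f_center = (f_lo + f_hi) / 2
fractional_range = (f_hi - f_lo) / f_center
print(round(100 * fractional_range, 1))  # ≈ 24.8, i.e. the ~25% quoted
```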

To improve the locking range of an injection-locked ring oscillator, a quadrature-locked loop (QLL) is introduced. The inherent dynamics of an injection-locked quadrature ring oscillator are used to improve its locking range from 5% (7–7.4 GHz) to 90% (4–11 GHz). The QLL is used to generate accurate clock phases for a four-channel optical receiver using a forwarded clock at quarter rate. The QLL drives an injection-locked oscillator (ILO) at each channel without any repeaters for local quadrature clock generation. Each local ILO has deskew capability for phase alignment. The optical receiver uses the inherent frequency-to-voltage conversion provided by the QLL to dynamically body-bias its devices. The wide locking range of the QLL helps to achieve a reliable data rate of 16–32 Gb/s, and adaptive body biasing helps maintain an ultra-low power consumption of 153 pJ/bit.

From the optical receiver we move on to a non-linear equalization technique for a vertical-cavity surface-emitting laser (VCSEL) based optical transmitter, to enable low-power, high-speed optical transmission. A non-linear time-domain optical model of the VCSEL is built and evaluated for accuracy. The modeling shows that, while conventional FIR-based pre-emphasis works well for LTI electrical channels, it is not optimum for the non-linear optical frequency response of the VCSEL. Based on simulations of the model, an optimum equalization methodology is derived. The equalization technique is used to achieve a data rate of 20 Gb/s with a power efficiency of 0.77 pJ/bit.

Relevance: 100.00%

Abstract:

Organismal development, homeostasis, and pathology are rooted in inherently probabilistic events. From gene expression to cellular differentiation, rates and likelihoods shape the form and function of biology. Processes ranging from growth to cancer homeostasis to reprogramming of stem cells all require transitions between distinct phenotypic states, and these occur at defined rates. Therefore, measuring the fidelity and dynamics with which such transitions occur is central to understanding natural biological phenomena and is critical for therapeutic interventions.

While these processes may produce robust population-level behaviors, decisions are made by individual cells. In certain circumstances, these minuscule computing units effectively roll dice to determine their fate. And while the 'omics' era has provided vast amounts of data on what these populations are doing en masse, the behaviors of the underlying units of these processes get washed out in averages.

Therefore, in order to understand the behavior of a sample of cells, it is critical to reveal how its underlying components, or mixture of cells in distinct states, each contribute to the overall phenotype. As such, we must first define what states exist in the population, determine what controls the stability of these states, and measure in high dimensionality the dynamics with which these cells transition between states.

To address a specific example of this general problem, we investigate the heterogeneity and dynamics of mouse embryonic stem cells (mESCs). While a number of reports have identified particular genes in ES cells that switch between 'high' and 'low' metastable expression states in culture, it remains unclear how levels of many of these regulators combine to form states in transcriptional space. Using a method called single molecule mRNA fluorescent in situ hybridization (smFISH), we quantitatively measure and fit distributions of core pluripotency regulators in single cells, identifying a wide range of variabilities between genes, but each explained by a simple model of bursty transcription. From this data, we also observed that strongly bimodal genes appear to be co-expressed, effectively limiting the occupancy of transcriptional space to two primary states across genes studied here. However, these states also appear punctuated by the conditional expression of the most highly variable genes, potentially defining smaller substates of pluripotency.
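
The "simple model of bursty transcription" invoked above is commonly taken to be the model in which bursts arrive at a constant rate and mRNA decays with first-order kinetics; its steady state is a negative binomial (Gamma-Poisson) distribution. The sketch below samples that distribution with hypothetical parameter values (not fit values from the thesis) and checks two of its signatures: the mean equals burst frequency × mean burst size, and the Fano factor is about 1 + mean burst size.

```python
import math
import random

def poisson(rng, lam):
    """Knuth's Poisson sampler (adequate for moderate lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= L:
            return k - 1

def sample_bursty_mrna(burst_freq, mean_burst, n_cells, seed=1):
    """Steady-state mRNA counts of the standard bursty model: a
    Gamma-Poisson (negative binomial) mixture with shape = burst_freq
    (bursts per mRNA lifetime) and scale = mean_burst."""
    rng = random.Random(seed)
    return [poisson(rng, rng.gammavariate(burst_freq, mean_burst))
            for _ in range(n_cells)]

# Hypothetical parameters, purely for illustration.
counts = sample_bursty_mrna(burst_freq=2.0, mean_burst=10.0, n_cells=20000)
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
fano = var / mean
print(mean, fano)  # mean ≈ 20, Fano factor ≈ 1 + mean_burst = 11
```

Fitting such distributions to smFISH count histograms, gene by gene, is what yields per-gene burst parameters and explains the wide range of variabilities noted above.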

Having defined the transcriptional states, we next asked what might control their stability or persistence. Surprisingly, we found that DNA methylation, a mark normally associated with irreversible developmental progression, was itself differentially regulated between these two primary states. Furthermore, both acute and chronic inhibition of DNA methyltransferase activity led to reduced heterogeneity among the population, suggesting that metastability can be modulated by this strong epigenetic mark.

Finally, because understanding the dynamics of state transitions is fundamental to a variety of biological problems, we sought to develop a high-throughput method for the identification of cellular trajectories without the need for cell-line engineering. We achieved this by combining cell-lineage information gathered from time-lapse microscopy with endpoint smFISH for measurements of final expression states. Applying a simple mathematical framework to these lineage-tree associated expression states enables the inference of dynamic transitions. We apply our novel approach in order to infer temporal sequences of events, quantitative switching rates, and network topology among a set of ESC states.

Taken together, we identify distinct expression states in ES cells, gain fundamental insight into how a strong epigenetic modifier enforces the stability of these states, and develop and apply a new method for the identification of cellular trajectories using scalable in situ readouts of cellular state.

Relevance: 100.00%

Abstract:

Freshwater fish of the genus Apteronotus (family Gymnotidae) generate a weak, high frequency electric field (< 100 mV/cm, 0.5-10 kHz) which permeates their local environment. These nocturnal fish are acutely sensitive to perturbations in their electric field caused by other electric fish, and nearby objects whose impedance is different from the surrounding water. This thesis presents high temporal and spatial resolution maps of the electric potential and field on and near Apteronotus. The fish's electric field is a complicated and highly stable function of space and time. Its characteristics, such as spectral composition, timing, and rate of attenuation, are examined in terms of physical constraints, and their possible functional roles in electroreception.

Temporal jitter of the periodic field is less than 1 µsec. However, electrocyte activity is not globally synchronous along the fish's electric organ. The propagation of electrocyte activation down the fish's body produces a rotation of the electric field vector in the caudal part of the fish. This may assist the fish in identifying nonsymmetrical objects, and could also confuse electrosensory predators that try to locate Apteronotus by following its field lines. The propagation also results in a complex spatiotemporal pattern of the EOD potential near the fish. Visualizing the potential on the same and different fish over timescales of several months suggests that it is stable and could serve as a unique signature for individual fish.

Measurements of the electric field were used to calculate the effects of simple objects on the fish's electric field. The shape of the perturbation or "electric image" on the fish's skin is relatively independent of a simple object's size, conductivity, and rostrocaudal location, and therefore could unambiguously determine object distance. The range of electrolocation may depend on both the size of objects and their rostrocaudal location. Only objects with very large dielectric constants cause appreciable phase shifts, and these are strongly dependent on the water conductivity.
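
The claim that image shape could encode distance can be illustrated with the simplest idealization (an induced point dipole above a flat sensing surface; a sketch of the physics, not the thesis's measured fields): the width of the electric image scales linearly with object distance, independent of the object's size and contrast, which only set its amplitude.

```python
def image_profile(x, d):
    """Normal field perturbation on a plane at distance d from an induced
    point dipole (small object in a locally uniform field); up to a
    constant factor, E_n(x) = (2 d^2 - x^2) / (x^2 + d^2)^(5/2)."""
    return (2 * d * d - x * x) / (x * x + d * d) ** 2.5

def fwhm(d, n=200001, span=10.0):
    """Full width at half maximum of the image, found on a fine grid."""
    xs = [span * (2 * i / (n - 1) - 1) for i in range(n)]
    vals = [image_profile(x, d) for x in xs]
    half = max(vals) / 2
    above = [x for x, v in zip(xs, vals) if v >= half]
    return max(above) - min(above)

w1, w2 = fwhm(1.0), fwhm(2.0)
print(w1, w2 / w1)  # image width grows in proportion to distance (ratio ≈ 2)
```

Because the width depends only on distance while size and conductivity rescale the amplitude, width-like features of the image can disambiguate object distance, as the measurements above indicate.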

Relevance: 100.00%

Abstract:

Plate tectonics shapes our dynamic planet through the creation and destruction of lithosphere. This work focuses on increasing our understanding of the processes at convergent and divergent boundaries through geologic and geophysical observations at modern plate boundaries. Recent work had shown that the subducting slab in central Mexico is most likely the flattest on Earth, yet there was no consensus about what caused it to originate. The first chapter of this thesis sets out to systematically test all previously proposed mechanisms for slab flattening on the Mexican case. What we have discovered is that there is only one model for which we can find no contradictory evidence. The lack of applicability of the standard mechanisms used to explain flat subduction in the Mexican example led us to question their applications globally. The second chapter expands the search for a cause of flat subduction, in both space and time. We focus on the historical record of flat slabs in South America and look for a correlation between the shallowing and steepening of slab segments and the inferred thickness of the subducting oceanic crust. Using plate reconstructions and the assumption that a crustal anomaly formed on a spreading ridge will produce two conjugate features, we recreate the history of subduction along the South American margin and find that there is no correlation between the subduction of bathymetric highs and shallow subduction. These studies show that a subducting crustal anomaly is neither a sufficient nor a necessary condition for flat-slab subduction. The final chapter in this thesis looks at the divergent plate boundary in the Gulf of California. Through geologic reconnaissance mapping and an intensive paleomagnetic sampling campaign, we try to constrain the location and orientation of a widespread volcanic marker unit, the Tuff of San Felipe. Although the resolution of the applied magnetic susceptibility technique proved inadequate to constrain the direction of the pyroclastic flow with high precision, we have been able to detect the tectonic rotation of coherent blocks as well as rotation within blocks.

Relevance: 100.00%

Abstract:

An economic air pollution control model, which determines the least cost of reaching various air quality levels, is formulated. The model takes the form of a general, nonlinear, mathematical programming problem. Primary contaminant emission levels are the independent variables. The objective function is the cost of attaining various emission levels and is to be minimized subject to constraints that given air quality levels be attained.

The model is applied to a simplified statement of the photochemical smog problem in Los Angeles County in 1975, with emissions specified by a two-dimensional vector: total reactive hydrocarbon (RHC) and nitrogen oxide (NOx) emissions. Air quality, also two-dimensional, is measured by the expected number of days per year that nitrogen dioxide (NO2) and mid-day ozone (O3) exceed standards in Central Los Angeles.

The minimum cost of reaching various emission levels is found by a linear programming model. The base or "uncontrolled" emission levels are those that will exist in 1975 with the present new car control program and with the degree of stationary source control existing in 1971. Controls, basically "add-on devices," are considered here for used cars, aircraft, and existing stationary sources. It is found that with these added controls, Los Angeles County emission levels (1300 tons/day RHC and 1000 tons/day NOx in 1969; 670 tons/day RHC and 790 tons/day NOx at the 1975 base level) can be reduced to 260 tons/day RHC (minimum RHC program) and 460 tons/day NOx (minimum NOx program).

"Phenomenological" or statistical air quality models provide the relationship between air quality and emissions. These models estimate the relationship by using atmospheric monitoring data taken at one (yearly) emission level and by using certain simple physical assumptions, (e. g., that emissions are reduced proportionately at all points in space and time). For NO2, (concentrations assumed proportional to NOx emissions), it is found that standard violations in Central Los Angeles, (55 in 1969), can be reduced to 25, 5, and 0 days per year by controlling emissions to 800, 550, and 300 tons /day, respectively. A probabilistic model reveals that RHC control is much more effective than NOx control in reducing Central Los Angeles ozone. The 150 days per year ozone violations in 1969 can be reduced to 75, 30, 10, and 0 days per year by abating RHC emissions to 700, 450, 300, and 150 tons/day, respectively, (at the 1969 NOx emission level).

The control cost-emission level and air quality-emission level relationships are combined in a graphical solution of the complete model to find the cost of various air quality levels. Best possible air quality levels with the controls considered here are 8 O3 and 10 NO2 violations per year (minimum ozone program) or 25 O3 and 3 NO2 violations per year (minimum NO2 program) with an annualized cost of $230,000,000 (above the estimated $150,000,000 per year for the new car control program for Los Angeles County motor vehicles in 1975).
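
The structure of the combined model — minimize control cost over the (RHC, NOx) emission pair subject to air quality constraints — can be sketched as a small search. The cost and violation curves below are hypothetical placeholders (only the zero-violation emission levels of 150 and 300 tons/day echo the figures above), so the resulting numbers are illustrative, not the thesis's results.

```python
# 1975 base emission levels from the text, tons/day.
BASE_RHC, BASE_NOX = 670.0, 790.0

def control_cost(rhc, nox):
    # Hypothetical convex cost of pushing emissions below the base point.
    return 0.5 * (BASE_RHC - rhc) ** 2 + 0.3 * (BASE_NOX - nox) ** 2

def o3_violations(rhc):
    # Hypothetical linear stand-in: 0 violation days at 150 tons/day RHC.
    return max(0.0, (rhc - 150.0) / 20.0)

def no2_violations(nox):
    # Hypothetical linear stand-in: 0 violation days at 300 tons/day NOx.
    return max(0.0, (nox - 300.0) / 30.0)

# Grid search over the attainable emission ranges quoted in the text,
# asking for at most 10 violation days per year of each pollutant.
best = None
for rhc in range(260, 671, 10):
    for nox in range(460, 791, 10):
        if o3_violations(rhc) <= 10 and no2_violations(nox) <= 10:
            c = control_cost(rhc, nox)
            if best is None or c < best[0]:
                best = (c, rhc, nox)

cost, rhc, nox = best
print(rhc, nox)  # cheapest emission pair meeting both standards
```

The thesis's graphical solution plays the same role: it intersects the cost-emission surface with the air-quality constraint curves to read off the least-cost point.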

Relevance: 100.00%

Abstract:

Close to equilibrium, a normal Bose or Fermi fluid can be described by an exact kinetic equation whose kernel is nonlocal in space and time. The general expression derived for the kernel is evaluated to second order in the interparticle potential. The result is a wavevector- and frequency-dependent generalization of the linear Uehling-Uhlenbeck kernel with the Born approximation cross section.

The theory is formulated in terms of second-quantized phase space operators whose equilibrium averages are the n-particle Wigner distribution functions. Convenient expressions for the commutators and anticommutators of the phase space operators are obtained. The two-particle equilibrium distribution function is analyzed in terms of momentum-dependent quantum generalizations of the classical pair distribution function h(k) and direct correlation function c(k). The kinetic equation is presented as the equation of motion of a two-particle correlation function, the phase space density-density anticommutator, and is derived by a formal closure of the quantum BBGKY hierarchy. An alternative derivation using a projection operator is also given. It is shown that the method used for approximating the kernel by a second order expansion preserves all the sum rules to the same order, and that the second-order kernel satisfies the appropriate positivity and symmetry conditions.

Relevance: 100.00%

Abstract:

The access of 1.2-40 MeV protons and 0.4-1.0 MeV electrons from interplanetary space to the polar cap regions has been investigated with an experiment on board a low-altitude, polar-orbiting satellite (OGO-4).

A total of 333 quiet time observations of the electron polar cap boundary give a mapping of the boundary between open and closed geomagnetic field lines which is an order of magnitude more comprehensive than previously available.

Persistent features (north/south asymmetries) in the polar cap proton flux, which are established as normal during solar proton events, are shown to be associated with different flux levels on open geomagnetic field lines than on closed field lines. The pole in which these persistent features are observed is strongly correlated to the sector structure of the interplanetary magnetic field and uncorrelated to the north/south component of this field. The features were observed in the north (south) pole during a negative (positive) sector 91% of the time, while the solar field had a southward component only 54% of the time. In addition, changes in the north/south component have no observable effect on the persistent features.

Observations of events associated with co-rotating regions of enhanced proton flux in interplanetary space are used to establish the characteristics of the 1.2 - 40 MeV proton access windows: the access window for low polar latitudes is near the earth, that for one high polar latitude region is ~250 R behind the earth, while that for the other high polar latitude region is ~1750 R behind the earth. All of the access windows are of approximately the same extent (~120 R). The following phenomena contribute to persistent polar cap features: limited interplanetary regions of enhanced flux propagating past the earth, radial gradients in the interplanetary flux, and anisotropies in the interplanetary flux.

These results are compared to the particle access predictions of the distant geomagnetic tail configurations proposed by Michel and Dessler, Dungey, and Frank. The data are consistent with neither the model of Michel and Dessler nor that of Dungey. The model of Frank can yield a consistent access window configuration provided the following constraints are satisfied: the merging rate for open field lines at one polar neutral point must be ~5 times that at the other polar neutral point, related to the solar magnetic field configuration in a consistent fashion, the migration time for open field lines to move across the polar cap region must be the same in both poles, and the open field line merging rate at one of the polar neutral points must be at least as large as that required for almost all the open field lines to have merged in O(one hour). The possibility of satisfying these constraints is investigated in some detail.

The role played by interplanetary anisotropies in the observation of persistent polar cap features is discussed. Special emphasis is given to the problem of non-adiabatic particle entry through regions where the magnetic field is changing direction. The degree to which such particle entry can be assumed to be nearly adiabatic is related to the particle rigidity, the angle through which the field turns, and the rate at which the field changes direction; this relationship is established for the case of polar cap observations.

Relevance: 100.00%

Abstract:

The field of cavity optomechanics, which concerns the coupling of a mechanical object's motion to the electromagnetic field of a high finesse cavity, allows for exquisitely sensitive measurements of mechanical motion, from large-scale gravitational wave detection to microscale accelerometers. Moreover, it provides a potential means to control and engineer the state of a macroscopic mechanical object at the quantum level, provided one can realize sufficiently strong interaction strengths relative to the ambient thermal noise. Recent experiments utilizing the optomechanical interaction to cool mechanical resonators to their motional quantum ground state allow for a variety of quantum engineering applications, including preparation of non-classical mechanical states and coherent optical to microwave conversion. Optomechanical crystals (OMCs), in which bandgaps for both optical and mechanical waves can be introduced through patterning of a material, provide one particularly attractive means for realizing strong interactions between high-frequency mechanical resonators and near-infrared light. Beyond the usual paradigm of cavity optomechanics involving isolated single mechanical elements, OMCs can also be fashioned into planar circuits for photons and phonons, and arrays of optomechanical elements can be interconnected via optical and acoustic waveguides. Such coupled OMC arrays have been proposed as a way to realize quantum optomechanical memories, nanomechanical circuits for continuous variable quantum information processing and phononic quantum networks, and as a platform for engineering and studying quantum many-body physics of optomechanical meta-materials.

However, while ground state occupancies (that is, average phonon occupancies less than one) have been achieved in OMC cavities utilizing laser cooling techniques, parasitic absorption and the concomitant degradation of the mechanical quality factor fundamentally limit this approach. On the other hand, the high mechanical frequency of these systems allows for the possibility of using a dilution refrigerator to simultaneously achieve low thermal occupancy and long mechanical coherence time by passively cooling the device to the millikelvin regime. This thesis describes efforts to realize the measurement of OMC cavities inside a dilution refrigerator, including the development of fridge-compatible optical coupling schemes and the characterization of the heating dynamics of the mechanical resonator at sub-kelvin temperatures.

We will begin by summarizing the theoretical framework used to describe cavity optomechanical systems, as well as a handful of the quantum applications envisioned for such devices. Then, we will present background on the design of the nanobeam OMC cavities used for this work, along with details of the design and characterization of tapered fiber couplers for optical coupling inside the fridge. Finally, we will present measurements of the devices at fridge base temperatures of Tf = 10 mK, using both heterodyne spectroscopy and time-resolved sideband photon counting, as well as detailed analysis of the prospects for future quantum applications based on the observed optically-induced heating.
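
Sideband photon counting yields a self-calibrated thermometer for the mechanical mode: the Stokes (phonon-emitting) scattering rate is proportional to n + 1 and the anti-Stokes rate to n, so their asymmetry gives the average occupancy directly. The count rates below are hypothetical numbers for illustration, not measurements from the thesis.

```python
def occupancy_from_asymmetry(stokes_rate, anti_stokes_rate):
    """Infer the average phonon occupancy <n> from motional sideband
    asymmetry: Stokes scattering is proportional to n + 1 and anti-Stokes
    to n, so n = anti_stokes / (stokes - anti_stokes)."""
    return anti_stokes_rate / (stokes_rate - anti_stokes_rate)

# Hypothetical sideband photon count rates (counts/s).
n_avg = occupancy_from_asymmetry(stokes_rate=1200.0, anti_stokes_rate=200.0)
print(n_avg)  # → 0.2 phonons: a ground-state-dominated occupancy
```

Tracking this inferred occupancy versus optical power is how the optically-induced heating dynamics mentioned above can be quantified.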

Relevance: 100.00%

Abstract:

Part I: The dynamic response of an elastic half space to an explosion in a buried spherical cavity is investigated by two methods. The first is implicit, and the final expressions for the displacements at the free surface are given as a series of spherical wave functions whose coefficients are solutions of an infinite set of linear equations. The second method is based on Schwarz's technique to solve boundary value problems, and leads to an iterative solution, starting with the known expression for the point source in a half space as first term. The iterative series is transformed into a system of two integral equations, and into an equivalent set of linear equations. In this way, a dual interpretation of the physical phenomena is achieved. The systems are treated numerically and the Rayleigh wave part of the displacements is given in the frequency domain. Several comparisons with simpler cases are analyzed to show the effect of the cavity radius-depth ratio on the spectra of the displacements.

Part II: A high speed, large capacity, hypocenter location program has been written for an IBM 7094 computer. Important modifications to the standard method of least squares have been incorporated in it. Among them are a new way to obtain the depth of shocks from the normal equations, and the computation of variable travel times for the local shocks in order to account automatically for crustal variations. The multiregional travel times, largely based upon the investigations of the United States Geological Survey, are confronted with actual traverses to test their validity.

It is shown that several crustal phases provide enough control to obtain good depth solutions for nuclear explosions, even though not all the recording stations are in the region where crustal corrections are considered. The use of the European travel times to locate the French nuclear explosion of May 1962 in the Sahara proved more adequate than previous work.

A simpler program, with manual crustal corrections, is used to process the Kern County series of aftershocks, and a clearer picture of tectonic mechanism of the White Wolf fault is obtained.

Shocks in the California region are processed automatically, and statistical frequency-depth and energy-depth curves are discussed in relation to the tectonics of the area.
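
The "standard method of least squares" for hypocenter location is a Gauss-Newton iteration on the arrival-time residuals (Geiger's method). The sketch below reduces it to a 2D epicenter search with a uniform velocity and known origin time — a minimal illustration with hypothetical station coordinates, not the thesis's multiregional travel-time program.

```python
import math

# Hypothetical station coordinates (km) and a uniform 6 km/s velocity.
stations = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0),
            (100.0, 100.0), (50.0, -40.0)]
v = 6.0
true_src = (62.0, 37.0)

def travel_time(src, sta):
    return math.hypot(src[0] - sta[0], src[1] - sta[1]) / v

t_obs = [travel_time(true_src, s) for s in stations]

# Gauss-Newton iteration on the epicenter (origin time assumed known).
x, y = 20.0, 20.0                        # initial guess
for _ in range(10):
    A, r = [], []
    for (sx, sy), t in zip(stations, t_obs):
        dist = math.hypot(x - sx, y - sy)
        r.append(t - dist / v)           # travel-time residual
        A.append(((x - sx) / (v * dist), (y - sy) / (v * dist)))
    # Normal equations (2x2), solved by Cramer's rule.
    a11 = sum(ax * ax for ax, ay in A)
    a12 = sum(ax * ay for ax, ay in A)
    a22 = sum(ay * ay for ax, ay in A)
    b1 = sum(ax * ri for (ax, ay), ri in zip(A, r))
    b2 = sum(ay * ri for (ax, ay), ri in zip(A, r))
    det = a11 * a22 - a12 * a12
    dx = (b1 * a22 - b2 * a12) / det
    dy = (a11 * b2 - a12 * b1) / det
    x, y = x + dx, y + dy

print(round(x, 3), round(y, 3))  # converges to the true epicenter (62, 37)
```

The thesis's modifications — solving for depth separately from the normal equations and substituting multiregional travel times for the uniform-velocity model — slot into this same iterative framework.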

Relevance: 100.00%

Abstract:

As the worldwide prevalence of diabetes mellitus continues to increase, diabetic retinopathy remains the leading cause of visual impairment and blindness in many developed countries. Between 32 and 40 percent of the roughly 246 million people with diabetes develop diabetic retinopathy. Approximately 4.1 million American adults 40 years and older are affected by diabetic retinopathy. This glucose-induced microvascular disease progressively damages the tiny blood vessels that nourish the retina, the light-sensitive tissue at the back of the eye, leading to retinal ischemia (i.e., inadequate blood flow), retinal hypoxia (i.e., oxygen deprivation), and retinal nerve cell degeneration or death. It is a most serious sight-threatening complication of diabetes, resulting in significant irreversible vision loss, and even total blindness.

Unfortunately, although current treatments of diabetic retinopathy (i.e., laser therapy, vitrectomy surgery and anti-VEGF therapy) can reduce vision loss, they only slow down but cannot stop the degradation of the retina. Patients require repeated treatment to protect their sight. The current treatments also have significant drawbacks. Laser therapy is focused on preserving the macula, the area of the retina that is responsible for sharp, clear, central vision, by sacrificing the peripheral retina since there is only limited oxygen supply. Therefore, laser therapy results in a constricted peripheral visual field, reduced color vision, delayed dark adaptation, and weakened night vision. Vitrectomy surgery increases the risk of neovascular glaucoma, another devastating ocular disease, characterized by the proliferation of fibrovascular tissue in the anterior chamber angle. Anti-VEGF agents have potential adverse effects, and currently there is insufficient evidence to recommend their routine use.

In this work, for the first time, a paradigm shift in the treatment of diabetic retinopathy is proposed: providing localized, supplemental oxygen to the ischemic tissue via an implantable MEMS device. The retinal architecture (e.g., thickness, cell densities, layered structure) of rabbit eyes exposed to ischemic-hypoxic injury was well preserved after targeted oxygen delivery to the hypoxic tissue, showing that an external oxygen source can improve retinal oxygenation and prevent the progression of the ischemic cascade.

The proposed MEMS device transports oxygen from an oxygen-rich space to the oxygen-deficient vitreous, the gel-like fluid that fills the inside of the eye, and then to the ischemic retina. This oxygen transport process is purely passive and completely driven by the gradient of oxygen partial pressure (pO2). Two types of devices were designed. For the first type, the oxygen-rich space is underneath the conjunctiva, a membrane covering the sclera (white part of the eye), beneath the eyelids and highly permeable to oxygen in the atmosphere when the eye is open. Therefore, sub-conjunctival pO2 is very high during the daytime. For the second type, the oxygen-rich space is inside the device since pure oxygen is needle-injected into the device on a regular basis.

To ensure that oxygen permeates the device neither too quickly nor too slowly, the material properties of the hybrid parylene/silicone structure (two biocompatible polymers widely used in medical devices) were investigated, including mechanical behavior, permeation rates, and adhesion. The thicknesses of the parylene and silicone layers then became key design parameters, fine-tuned to achieve the optimal oxygen permeation rate.
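
The layered design can be understood with a simple series-resistance (solution-diffusion) picture of steady-state permeation: the resistances of the parylene and silicone layers add, so even a thin, low-permeability parylene coat can throttle the flux through a much thicker silicone layer. A minimal sketch of that picture follows; the `composite_flux` helper and the permeability values are illustrative assumptions, not measured properties from this work.

```python
# Series-resistance (solution-diffusion) model of steady-state gas
# permeation through a layered membrane. Permeability values below are
# illustrative placeholders, NOT measured properties from this thesis.

def composite_flux(delta_p, layers):
    """Steady flux J = delta_p / sum(thickness_i / permeability_i).

    delta_p : partial-pressure difference across the membrane (mmHg)
    layers  : iterable of (thickness_cm, permeability) pairs, with
              permeability in cm^3(STP)*cm / (cm^2*s*mmHg)
    """
    resistance = sum(t / p for t, p in layers)   # series resistances add
    return delta_p / resistance                  # cm^3(STP) / (cm^2*s)

silicone = (50e-4, 6.0e-8)    # 50 um silicone layer: high O2 permeability
parylene = (1e-4, 3.0e-11)    # 1 um parylene layer: low O2 permeability

j_silicone_only = composite_flux(100.0, [silicone])
j_with_parylene = composite_flux(100.0, [silicone, parylene])
print(j_with_parylene / j_silicone_only)   # thin parylene throttles the flux
```

In this toy model the parylene resistance dominates the total, so small changes in parylene thickness give fine control over the overall permeation rate, consistent with the thickness tuning described above.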

The passive MEMS oxygen transporter devices were designed, built, and tested in both bench-top artificial eye models and in-vitro porcine cadaver eyes. The 3D unsteady saccade-induced laminar flow of water inside the eye model was simulated with computational fluid dynamics to study the convective oxygen transport induced by saccades (rapid eye movements); the saccade-enhanced transport effect was also demonstrated experimentally. Acute in-vivo animal experiments were performed in rabbits and dogs to verify the surgical procedure and the device functionality. Various hypotheses were confirmed both experimentally and computationally, suggesting that both types of devices are promising treatments for diabetic retinopathy. Chronic implantation of devices in ischemic dog eyes is still underway.
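
The intuition behind saccade-enhanced transport can be conveyed with a back-of-envelope Péclet number comparing convective to diffusive oxygen transport in the vitreous during a saccade. The sketch below uses rough, textbook-scale values, not parameters from this thesis.

```python
import math

# Order-of-magnitude Peclet number for saccade-induced convection of
# oxygen in the eye. All values are rough textbook-scale estimates,
# NOT parameters from this thesis.
D_O2 = 2.0e-5                  # O2 diffusivity in water, cm^2/s (approximate)
L = 1.1                        # vitreous length scale, cm (rough eye radius)
omega = math.radians(300.0)    # peak saccade angular velocity (~300 deg/s)
u = omega * L                  # characteristic wall velocity, cm/s

peclet = u * L / D_O2          # Pe = convective / diffusive transport rate
print(f"Pe ~ {peclet:.1e}")    # Pe >> 1: saccades should enhance transport
```

A Péclet number several orders of magnitude above unity is consistent with the experimentally observed saccade-enhanced transport: eye movements stir the vitreous far faster than diffusion alone can carry oxygen.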

The proposed MEMS oxygen transporter devices can also be applied to treat other ocular and systemic diseases accompanied by retinal ischemia, such as central retinal artery occlusion, carotid artery disease, and some forms of glaucoma.

Abstract:

The early stage of laminar-turbulent transition in a hypervelocity boundary layer is studied using a combination of modal linear stability analysis, transient growth analysis, and direct numerical simulation. Modal stability analysis is used to clarify the behavior of first- and second-mode instabilities on flat plates and sharp cones for a wide range of high-enthalpy flow conditions relevant to experiments in impulse facilities. Vibrational nonequilibrium is included in this analysis, its influence on the stability properties is investigated, and simple models for predicting when it is important are described.

Transient growth analysis is used to determine the optimal initial conditions that lead to the largest possible energy amplification within the flow. Such analysis is performed for both spatially and temporally evolving disturbances. The analysis again targets flows that have large stagnation enthalpy, such as those found in shock tunnels, expansion tubes, and atmospheric flight at high Mach numbers, and clarifies the effects of Mach number and wall temperature on the amplification achieved. Direct comparisons between modal and non-modal growth are made to determine the relative importance of these mechanisms under different flow regimes.
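
The non-modal mechanism can be illustrated on a toy operator: for a linear system dq/dt = Aq, the largest possible energy amplification over all unit-energy initial conditions at time t is G(t) = ||e^{tA}||_2^2. The 2x2 non-normal matrix below is a standard illustration of this effect, not the compressible boundary-layer operator studied in this thesis.

```python
import numpy as np

# Toy non-normal, linearly stable operator (both eigenvalues negative).
# A standard illustration of transient growth, NOT the compressible
# boundary-layer operator from this thesis.
A = np.array([[-0.01, 1.00],
              [ 0.00, -0.02]])

evals, V = np.linalg.eig(A)          # distinct real eigenvalues here

def max_growth(t):
    """Largest energy gain G(t) = ||e^{tA}||_2^2 over unit initial states."""
    expA = (V * np.exp(t * evals)) @ np.linalg.inv(V)   # e^{tA} via eigenbasis
    return float(np.linalg.norm(expA, 2) ** 2)

print(max_growth(0.0))    # 1.0: no amplification at t = 0
print(max_growth(50.0))   # large transient growth despite modal decay
```

Modal analysis predicts monotonic decay for this operator, yet G(t) transiently exceeds unity by orders of magnitude; the optimal initial condition realizing G(t) is the leading right singular vector of e^{tA}.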

Conventional stability analysis employs the assumption that disturbances evolve with either a fixed frequency (spatial analysis) or a fixed wavenumber (temporal analysis). Direct numerical simulations are employed to relax these assumptions and investigate the downstream propagation of wave packets that are localized in space and time, and hence contain a distribution of frequencies and wavenumbers. Such wave packets are commonly observed in experiments and hence their amplification is highly relevant to boundary layer transition prediction. It is demonstrated that such localized wave packets experience much less growth than is predicted by spatial stability analysis, and therefore it is essential that the bandwidth of localized noise sources that excite the instability be taken into account in making transition estimates. A simple model based on linear stability theory is also developed which yields comparable results with an enormous reduction in computational expense. This enables the amplification of finite-width wave packets to be taken into account in transition prediction.
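
The bandwidth effect described above can be mimicked with a toy model: weight a single-frequency amplification envelope by a Gaussian packet spectrum and compare the resulting packet gain with the peak single-frequency amplification. The Gaussian shapes and numbers below are assumed for illustration and are not the stability data of this thesis.

```python
import numpy as np

# Toy illustration: a wave packet with finite frequency bandwidth is
# amplified less than the single most-unstable frequency would suggest.
# The N-factor envelope below is an assumed model shape, NOT the
# stability data from this thesis.
f = np.linspace(0.0, 2.0, 2001)               # nondimensional frequency grid
df = f[1] - f[0]
N = 8.0 * np.exp(-((f - 1.0) / 0.15) ** 2)    # assumed N-factor envelope

def packet_gain(bandwidth):
    """Amplitude gain of a unit-area Gaussian packet centered on the peak."""
    spectrum = np.exp(-((f - 1.0) / bandwidth) ** 2)
    spectrum /= spectrum.sum() * df           # normalize packet to unit area
    return float((spectrum * np.exp(N)).sum() * df)

single_freq_gain = float(np.exp(N.max()))     # what spatial LST would predict
print(packet_gain(0.01) / single_freq_gain)   # narrow packet: near 1
print(packet_gain(0.50) / single_freq_gain)   # broad packet: far below 1
```

In this toy model, a packet much narrower than the unstable band recovers nearly the full single-frequency growth, while a broad packet is amplified by only a fraction of it, which is the qualitative trend the simulations above demonstrate.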

Abstract:

This thesis is in two parts. In the first part, the operator structure of the singular terms in the equal-time commutator of the space and time components of the electromagnetic current is investigated in perturbation theory by establishing a connection with Feynman diagrams. It is made very plausible that the singular term is a c-number. Some remarks are made about the same problem in the electrodynamics of a spinless particle.

In the second part, an SU(3)-symmetric multi-channel calculation of the electromagnetic mass differences in the pseudoscalar meson and baryon octets is carried out, with an attempt to include some of the physics of the crossed (pair-annihilation) channel along the lines of the recent work by Ball and Zachariasen. The importance of the tensor-meson Regge trajectories is emphasized. The agreement with experiment is poor for the isospin-one mass differences, but excellent for those with isospin two.