8 results for proposed. budget
in CaltechTHESIS
Abstract:
In this thesis I investigate some aspects of the thermal budget of pahoehoe lava flows. This is done with a combination of general field observations, quantitative modeling, and specific field experiments. The results of this work apply to pahoehoe flows in general, even though the vast bulk of the work has been conducted on the lavas formed by the Pu'u 'O'o - Kupaianaha eruption of Kilauea Volcano on Hawai'i. The field observations rely heavily on discussions with the staff of the United States Geological Survey's Hawaiian Volcano Observatory (HVO), under whom I labored repeatedly in 1991-1993 for a period totaling about 10 months.
The quantitative models I have constructed are based on the physical processes observed by others and myself to be active on pahoehoe lava flows. By building up these models from the basic physical principles involved, this work avoids many of the pitfalls of earlier attempts to fit field observations with "intuitively appropriate" mathematical expressions. Unlike many earlier works, my model results can be analyzed in terms of the interactions between the different physical processes. I constructed models to: (1) describe the initial cooling of small pahoehoe flow lobes and (2) understand the thermal budget of lava tubes.
The field experiments were designed either to validate model results or to constrain key input parameters. In support of the cooling model for pahoehoe flow lobes, attempts were made to measure: (1) the cooling within the flow lobes, (2) the amount of heat transported away from the lava by wind, and (3) the growth of the crust on the lobes. Field data collected by Jones [1992], Hon et al. [1994b], and Denlinger [Keszthelyi and Denlinger, in prep.] were also particularly useful in constraining my cooling model for flow lobes. Most of the field observations I have used to constrain the thermal budget of lava tubes were collected by HVO (geological and geophysical monitoring) and the Jet Propulsion Laboratory (airborne infrared imagery [Realmuto et al., 1992]). I was able to assist HVO with part of their lava tube monitoring program and also to collect helicopter-borne and ground-based IR video in collaboration with JPL [Keszthelyi et al., 1993].
The most significant results of this work are (1) the quantitative demonstration that the emplacement of pahoehoe and 'a'a flows is fundamentally different, (2) confirmation that even the longest lava flows observed in our Solar System could have formed as low effusion rate, tube-fed pahoehoe flows, and (3) the recognition that the atmosphere plays a very important role throughout the cooling history of pahoehoe lava flows. In addition to answering specific questions about the thermal budget of tube-fed pahoehoe lava flows, this thesis has led to some additional, more general, insights into the emplacement of these lava flows. This general understanding of the tube-fed pahoehoe lava flow as a system has suggested foci for future research in this part of physical volcanology.
Abstract:
The scalability of CMOS technology has driven computation into a diverse range of applications across the power consumption, performance and size spectra. Communication is a necessary adjunct to computation, and whether this is to push data from node-to-node in a high-performance computing cluster or from the receiver of a wireless link to a neural stimulator in a biomedical implant, interconnect can take up a significant portion of the overall system power budget. Although a single interconnect methodology cannot address such a broad range of systems efficiently, there are a number of key design concepts that enable good interconnect design in the age of highly-scaled CMOS: an emphasis on highly-digital approaches to solving ‘analog’ problems, hardware sharing between links as well as between different functions (such as equalization and synchronization) in the same link, and adaptive hardware that changes its operating parameters to mitigate not only variation in the fabrication of the link, but also link conditions that change over time. These concepts are demonstrated through the use of two design examples, at the extremes of the power and performance spectra.
A novel all-digital clock and data recovery technique for high-performance, high density interconnect has been developed. Two independently adjustable clock phases are generated from a delay line calibrated to 2 UI. One clock phase is placed in the middle of the eye to recover the data, while the other is swept across the delay line. The samples produced by the two clocks are compared to generate eye information, which is used to determine the best phase for data recovery. The functions of the two clocks are swapped after the data phase is updated; this ping-pong action allows an infinite delay range without the use of a PLL or DLL. The scheme's generalized sampling and retiming architecture is used in a sharing technique that saves power and area in high-density interconnect. The eye information generated is also useful for tuning an adaptive equalizer, circumventing the need for dedicated adaptation hardware.
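The ping-pong phase-update loop described above can be sketched behaviorally. Everything here (the tap count, the toy sampler's error model, and the function names) is an illustrative assumption, not the actual circuit:

```python
import random

TAPS = 64          # delay line calibrated to 2 UI
UI = TAPS // 2     # taps per unit interval

def sample(phase, bit):
    """Toy sampler: decisions are reliable near the eye center and
    degrade toward the bit edges within each UI."""
    dist = min(phase % UI, UI - phase % UI) / (UI / 2)  # 0 at edge, 1 at center
    p_correct = 0.5 + 0.45 * dist
    return bit if random.random() < p_correct else 1 - bit

def scan_eye(data_phase, nbits=2000):
    """Sweep the scan clock across the delay line, counting how often
    each tap's sample agrees with the data clock's decision -- this
    agreement profile is the 'eye information'."""
    agree = [0] * TAPS
    for _ in range(nbits):
        bit = random.randrange(2)
        ref = sample(data_phase, bit)
        for p in range(TAPS):
            agree[p] += (sample(p, bit) == ref)
    return agree

def ping_pong_update(data_phase):
    """One update step: the tap with the widest agreement becomes the
    new data phase; the two clocks then swap roles, so the sweep can
    continue indefinitely without a PLL or DLL."""
    agree = scan_eye(data_phase)
    return max(range(TAPS), key=lambda p: agree[p])
```

Because the agreement profile peaks at the eye center, the recovered phase settles there regardless of where the sweep starts; the same profile is what an adaptive equalizer could reuse for tuning.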
On the other side of the performance/power spectra, a capacitive proximity interconnect has been developed to support 3D integration of biomedical implants. In order to integrate more functionality while staying within size limits, implant electronics can be embedded onto a foldable parylene (‘origami’) substrate. Many of the ICs in an origami implant will be placed face-to-face with each other, so wireless proximity interconnect can be used to increase communication density while decreasing implant size, as well as facilitate a modular approach to implant design, where pre-fabricated parylene-and-IC modules are assembled together on-demand to make custom implants. Such an interconnect needs to be able to sense and adapt to changes in alignment. The proposed array uses a TDC-like structure to realize both communication and alignment sensing within the same set of plates, increasing communication density and eliminating the need to infer link quality from a separate alignment block. In order to distinguish the communication plates from the nearby ground plane, a stimulus is applied to the transmitter plate, which is rectified at the receiver to bias a delay generation block. This delay is in turn converted into a digital word using a TDC, providing alignment information.
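The alignment-sensing chain described above (coupled stimulus, rectified bias, delay generation, time-to-digital conversion) can be summarized with a toy monotonic model; all constants and names are illustrative assumptions, not the fabricated design:

```python
def tdc_alignment_code(overlap, full_scale_delay_ns=10.0, lsb_ns=0.1):
    """Map plate overlap (0..1] to a TDC output code.  More overlap
    means more coupled stimulus, a larger rectified bias current,
    hence a shorter generated delay and a smaller code."""
    if not 0.0 < overlap <= 1.0:
        raise ValueError("overlap must be in (0, 1]")
    bias = overlap                          # rectified bias ~ coupling ~ overlap
    delay_ns = full_scale_delay_ns / bias   # current-starved delay cell
    return int(delay_ns / lsb_ns)           # time-to-digital conversion
```

The useful property is the monotonic mapping: smaller codes indicate better-aligned plates, so the same structure that carries data also reports link quality directly, with no separate alignment block.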
Abstract:
Part I
Particles are a key feature of planetary atmospheres. On Earth they represent the greatest source of uncertainty in the global energy budget. This uncertainty can be addressed by making more measurements, by improving the theoretical analysis of measurements, and by better modeling basic particle nucleation and initial particle growth within an atmosphere. This work will focus on the latter two methods of improvement.
Uncertainty in measurements is largely due to particle charging. Accurate descriptions of particle charging are challenging because one deals with particles in a gas as opposed to a vacuum, so different length scales come into play. Previous studies have considered the effects of transition between the continuum and kinetic regimes and the effects of two- and three-body interactions within the kinetic regime. These studies, however, rely on questionable assumptions about the charging process, which skew the resulting observations and bias the proposed dynamics of aerosol particles. These assumptions affect both the ions and particles in the system. Ions are assumed to be point monopoles that have a single characteristic speed rather than follow a distribution. Particles are assumed to be perfect conductors that have up to five elementary charges on them. The effects of three-body interaction, ion-molecule-particle, are also overestimated. By revising this theory so that the basic physical attributes of both ions and particles and their interactions are better represented, we are able to make more accurate predictions of particle charging in both the kinetic and continuum regimes.
The same revised theory that was used above to model ion charging can also be applied to the flux of neutral vapor phase molecules to a particle or initial cluster. Using these results we can model the vapor flux to a neutral or charged particle due to diffusion and electromagnetic interactions. In many classical theories currently applied to these models, the finite size of the molecule and the electromagnetic interaction between the molecule and particle, especially for the neutral particle case, are completely ignored, or, as is often the case for a permanent dipole vapor species, strongly underestimated. Comparing our model to these classical models we determine an “enhancement factor” to characterize how important the addition of these physical parameters and processes is to the understanding of particle nucleation and growth.
Part II
Whispering gallery mode (WGM) optical biosensors are capable of extraordinarily sensitive specific and non-specific detection of species suspended in a gas or fluid. Recent experimental results suggest that these devices may attain single-molecule sensitivity to protein solutions in the form of stepwise shifts in their resonance wavelength, λ_R, but present sensor models predict much smaller steps than were reported. This study examines the physical interaction between a WGM sensor and a molecule adsorbed to its surface, exploring assumptions made in previous efforts to model WGM sensor behavior, and describing computational schemes that model the experiments for which single protein sensitivity was reported. The resulting model is used to simulate sensor performance, within constraints imposed by the limited material property data. On this basis, we conclude that nonlinear optical effects would be needed to attain the reported sensitivity, and that, in the experiments for which extreme sensitivity was reported, a bound protein experiences optical energy fluxes too high for such effects to be ignored.
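For context, the textbook first-order (reactive) perturbation estimate of the resonance shift caused by a single bound molecule — the starting point that models of this kind refine, not necessarily the thesis's final model — can be written as:

```latex
% Fractional shift in the WGM resonance wavelength \lambda_R produced by
% a molecule of excess polarizability \alpha_{ex} bound at position
% \mathbf{r}_0, where \mathbf{E}_0 is the unperturbed mode field:
\frac{\Delta\lambda_R}{\lambda_R} \approx
  \frac{\alpha_{ex}\,\lvert \mathbf{E}_0(\mathbf{r}_0)\rvert^2}
       {2 \int \varepsilon(\mathbf{r})\,\lvert \mathbf{E}_0(\mathbf{r})\rvert^2\, dV}
```

The numerator scales with the mode intensity at the binding site while the denominator is the total stored mode energy, which is why linear models of this form predict very small steps for a single protein.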
Abstract:
The work presented in this thesis revolves around erasure correction coding, as applied to distributed data storage and real-time streaming communications.
First, we examine the problem of allocating a given storage budget over a set of nodes for maximum reliability. The objective is to find an allocation of the budget that maximizes the probability of successful recovery by a data collector accessing a random subset of the nodes. This optimization problem is challenging in general because of its combinatorial nature, despite its simple formulation. We study several variations of the problem, assuming different allocation models and access models, and determine the optimal allocation and the optimal symmetric allocation (in which all nonempty nodes store the same amount of data) for a variety of cases. Although the optimal allocation can have nonintuitive structure and can be difficult to find in general, our results suggest that, as a simple heuristic, reliable storage can be achieved by spreading the budget maximally over all nodes when the budget is large, and spreading it minimally over a few nodes when it is small. Coding would therefore be beneficial in the former case, while uncoded replication would suffice in the latter case.
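A concrete toy instance of this allocation problem (our simplification for illustration, not the thesis's exact model): a unit-size file, a budget T spread evenly over m of n nodes, and a collector that accesses each node independently with probability p. Recovery succeeds when the accessed nodes hold at least one unit of data in total, i.e. at least ⌈m/T⌉ nonempty nodes are reached:

```python
import math

def success_prob(m, T, p):
    """P(recovery) for a symmetric allocation spreading budget T evenly
    over m nonempty nodes, each accessed independently with prob. p.
    Recovery needs at least ceil(m/T) of the m nodes to be accessed."""
    need = math.ceil(m / T - 1e-9)   # guard against float round-off
    return sum(math.comb(m, k) * p**k * (1 - p) ** (m - k)
               for k in range(need, m + 1))

def best_symmetric(n, T, p):
    """Exhaustively pick the best number of nonempty nodes."""
    return max(range(1, n + 1), key=lambda m: success_prob(m, T, p))
```

With n = 10 and p = 0.5, a large budget (T = 3) favors spreading over most of the nodes, while a minimal budget (T = 1) favors concentrating everything on a single node — matching the heuristic stated above (coding helps in the first case; uncoded replication suffices in the second).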
Second, we study how distributed storage allocations affect the recovery delay in a mobile setting. Specifically, two recovery delay optimization problems are considered for a network of mobile storage nodes: the maximization of the probability of successful recovery by a given deadline, and the minimization of the expected recovery delay. We show that the first problem is closely related to the earlier allocation problem, and solve the second problem completely for the case of symmetric allocations. It turns out that the optimal allocations for the two problems can be quite different. In a simulation study, we evaluated the performance of a simple data dissemination and storage protocol for mobile delay-tolerant networks, and observed that the choice of allocation can have a significant impact on the recovery delay under a variety of scenarios.
Third, we consider a real-time streaming system where messages created at regular time intervals at a source are encoded for transmission to a receiver over a packet erasure link; the receiver must subsequently decode each message within a given delay from its creation time. For erasure models containing a limited number of erasures per coding window, per sliding window, and containing erasure bursts whose maximum length is sufficiently short or long, we show that a time-invariant intrasession code asymptotically achieves the maximum message size among all codes that allow decoding under all admissible erasure patterns. For the bursty erasure model, we also show that diagonally interleaved codes derived from specific systematic block codes are asymptotically optimal over all codes in certain cases. We also study an i.i.d. erasure model in which each transmitted packet is erased independently with the same probability; the objective is to maximize the decoding probability for a given message size. We derive an upper bound on the decoding probability for any time-invariant code, and show that the gap between this bound and the performance of a family of time-invariant intrasession codes is small when the message size and packet erasure probability are small. In a simulation study, these codes performed well against a family of random time-invariant convolutional codes under a number of scenarios.
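The erasure models above constrain how many erasures may appear per window and how long a burst may be. A small helper (an illustrative sketch, not the thesis's code) makes the sliding-window and burst-length admissibility checks concrete:

```python
def admissible_sliding(pattern, window, z):
    """True iff every length-`window` sliding window of the 0/1 erasure
    pattern (1 = erased packet) contains at most z erasures."""
    return all(sum(pattern[i:i + window]) <= z
               for i in range(max(1, len(pattern) - window + 1)))

def max_burst(pattern):
    """Length of the longest run of consecutive erasures."""
    best = run = 0
    for e in pattern:
        run = run + 1 if e else 0
        best = max(best, run)
    return best
```

A code designed for a given (window, z) pair must decode every pattern that `admissible_sliding` accepts; the bursty model instead bounds `max_burst`.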
Finally, we consider the joint problems of routing and caching for named data networking. We propose a backpressure-based policy that employs virtual interest packets to make routing and caching decisions. In a packet-level simulation, the proposed policy outperformed a basic protocol that combines shortest-path routing with least-recently-used (LRU) cache replacement.
Abstract:
Progress is made on the numerical modeling of both laminar and turbulent non-premixed flames. Instead of solving the transport equations for the numerous species involved in the combustion process, the present study proposes reduced-order combustion models based on local flame structures.
For laminar non-premixed flames, curvature and multi-dimensional diffusion effects are found critical for the accurate prediction of sooting tendencies. A new numerical model based on modified flamelet equations is proposed. Sooting tendencies are calculated numerically using the proposed model for a wide range of species. These first numerically-computed sooting tendencies are in good agreement with experimental data. To further quantify curvature and multi-dimensional effects, a general flamelet formulation is derived mathematically. A budget analysis of the general flamelet equations is performed on an axisymmetric laminar diffusion flame. A new chemistry tabulation method based on the general flamelet formulation is proposed. This new tabulation method is applied to the same flame and demonstrates significant improvement compared to previous techniques.
For turbulent non-premixed flames, a new model to account for chemistry-turbulence interactions is proposed. It is found that these interactions are not important for radicals and small species, but substantial for aromatic species. The validity of various existing flamelet-based chemistry tabulation methods is examined, and a new linear relaxation model is proposed for aromatic species. The proposed relaxation model is validated against full chemistry calculations. To further quantify the importance of aromatic chemistry-turbulence interactions, Large-Eddy Simulations (LES) have been performed on a turbulent sooting jet flame. The aforementioned relaxation model is used to provide closure for the chemical source terms of transported aromatic species. The effects of turbulent unsteadiness on soot are highlighted by comparing the LES results with a separate LES using fully-tabulated chemistry. It is shown that turbulent unsteady effects are of critical importance for the accurate prediction of not only the inception locations, but also the magnitude and fluctuations of soot.
Abstract:
Politically the Colorado river is an interstate as well as an international stream. Physically the basin divides itself distinctly into three sections. The upper section, from the headwaters to the mouth of the San Juan, comprises about 40 percent of the total area of the basin and affords about 87 percent of the total runoff, or an average of about 15 000 000 acre feet per annum. High mountains and cold weather are found in this section. The middle section, from the mouth of the San Juan to the mouth of the Williams, comprises about 35 percent of the total area of the basin and supplies about 7 percent of the annual runoff. Narrow canyons and mild weather prevail in this section. The lower third of the basin is composed mainly of hot, arid plains of low altitude. It comprises some 25 percent of the total area of the basin and furnishes about 6 percent of the average annual runoff.
The proposed Diamond Creek reservoir is located in the middle section and is wholly within the boundary of Arizona. The site is at the mouth of Diamond Creek and is only 16 m. from Beach Spring, a station on the Santa Fe railroad. It is solely a power project with a limited storage capacity. The dam which creates the reservoir is of the gravity type, to be constructed across the river. The walls and foundation are of granite. For a dam of 290 feet in height, the backwater will extend about 25 m. up the river.
The power house will be placed right below the dam, perpendicular to the axis of the river. It is entirely a concrete structure. The power installation would consist of eighteen 37 500 H.P. vertical, variable head turbines, directly connected to 28 000 kva, 110 000 v., 3 phase, 60 cycle generators with necessary switching and auxiliary apparatus. Each unit is to be fed by a separate penstock wholly embedded in the masonry.
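As a quick consistency check on the figures above (assuming 1 hp ≈ 0.7457 kW and roughly unity power factor — our assumptions, not stated in the text), the turbine and generator ratings line up:

```python
HP_TO_KW = 0.7457                     # 1 horsepower in kilowatts
turbine_kw = 37_500 * HP_TO_KW        # one turbine: about 27 964 kW of shaft power
plant_kw = 18 * turbine_kw            # eighteen units: about 503 000 kW total
# ~27 964 kW per unit is consistent with driving a 28 000 kva generator
# at close to unity power factor.
```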
Concerning the power market, the main electric transmission lines would extend to Prescott, Phoenix, Mesa, Florence, etc. The mining regions of the mountains of Arizona would be the most adequate market. The demand for power in the above named places might not be large at present. It will, from the observation of the writer, rapidly increase with the wonderful advancement of all kinds of industrial development.
All these things being comparatively feasible, there is one difficult problem: that is the silt. At the Diamond Creek dam site the average annual silt discharge is about 82 650 acre feet. The geographical conditions, however, will not permit silt deposits right in the reservoir. So this design will be made under the assumption given in Section 4.
The silt condition and the change of lower course of the Colorado are much like those of the Yellow River in China. But one thing is different. On the Colorado most of the canyon walls are of granite, while those on the Yellow are of alluvial loess: so it is very hard, if not impossible, to get a favorable dam site on the lower part. As a visitor to this country, I should like to see the full development of the Colorado: but how about THE YELLOW!
Abstract:
The Los Angeles Harbor at San Pedro, with its natural advantages and the big development of these now under way, will very soon be the key to the traffic routes of Southern California. The Atchison, Topeka, and Santa Fe railway company, realizing this and not wishing to be caught asleep, has planned to build a line from El Segundo to the harbor. The harbor works are not the only developments taking place in these localities, and the proposed new line is intended to serve these as well.
Abstract:
The hydroxyketone C-3, an intermediate in the stereo-selective total synthesis of dl-Desoxypodocarpic acid (ii), has been shown by both degradative and synthetic pathways to rearrange in the presence of base to diosphenol E-1 (5-isoabietic acid series). The exact spatial arrangements of the systems represented by formulas C-3 and E-1 have been investigated (as the p-bromobenzoates) by single-crystal X-ray diffraction analyses. The hydroxyketone F-1, the proposed intermediate in the rearrangement, has been synthesized. Its conversion to diosphenol E-1 has been studied, and a single-crystal analysis of the p-bromobenzoate derivative has been performed. The initially desired diosphenol C-6 has been prepared, and has been shown to be stable to the potassium t-butoxide rearrangement conditions. Oxidative cleavage of diosphenol E-1 and subsequent cyclization with the aid of polyphosphoric acid has been shown to lead to keto acid I-2 (benzobicyclo [3.3.1] nonane series) rather than keto acid H-2 (5-isoabietic acid series).