24 results for "small scaled conveyor modules"

in CaltechTHESIS


Relevance: 20.00%

Abstract:

The problem of the slow viscous flow of a gas past a sphere is considered. The fluid cannot be treated as incompressible in the limit when the Reynolds number Re and the Mach number M tend to zero in such a way that Re ~ o(M^2). In this case, the lowest order approximation to the steady Navier-Stokes equations of motion leads to a paradox discovered by Lagerstrom and Chester. This paradox is resolved within the framework of continuum mechanics using the classical slip condition and an iteration scheme that takes into account certain terms in the full Navier-Stokes equations that drop out in the approximation used by the above authors. It is found, however, that the drag predicted by the theory does not agree with R. A. Millikan's classic experiments on sphere drag.

The whole question of the applicability of the Navier-Stokes theory when the Knudsen number M/Re is not small is examined. A new slip condition is proposed. The idea that the Navier-Stokes equations coupled with this condition may adequately describe small Reynolds number flows when the Knudsen number is not too large is looked at in some detail. First, a general discussion of asymptotic solutions of the equations for all such flows is given. The theory is then applied to several concrete problems of fluid motion. The deductions from this theory appear to interpret and summarize the results of Millikan over a much wider range of Knudsen numbers (almost up to the free molecular or kinetic limit) than hitherto believed possible by a purely continuum theory. Further experimental tests are suggested and certain interesting applications to the theory of dilute suspensions in gases are noted. Some of the questions raised in the main body of the work are explored further in the appendices.

Relevance: 20.00%

Abstract:

The general theory of Whitham for slowly-varying non-linear wavetrains is extended to the case where some of the defining partial differential equations cannot be put into conservation form. Typical examples are considered in plasma dynamics and water waves in which the lack of a conservation form is due to dissipation; an additional non-conservative element, the presence of an external force, is treated for the plasma dynamics example. Certain numerical solutions of the water waves problem (the Korteweg-de Vries equation with dissipation) are considered and compared with perturbation expansions about the linearized solution; it is found that the first correction term in the perturbation expansion is an excellent qualitative indicator of the deviation of the dissipative decay rate from linearity.

A method for deriving necessary and sufficient conditions for the existence of a general uniform wavetrain solution is presented and illustrated in the plasma dynamics problem. Peaking of the plasma wave is demonstrated, and it is shown that the necessary and sufficient existence conditions are essentially equivalent to the statement that no wave may have an amplitude larger than the peaked wave.

A new type of fully non-linear stability criterion is developed for the plasma uniform wavetrain. It is shown explicitly that this wavetrain is stable in the near-linear limit. The nature of this new type of stability is discussed.

Steady shock solutions are also considered. By a quite general method, it is demonstrated that the plasma equations studied here have no steady shock solutions whatsoever. A special type of steady shock is proposed, in which a uniform wavetrain joins across a jump discontinuity to a constant state. Such shocks may indeed exist for the Korteweg-de Vries equation, but are barred from the plasma problem because entropy would decrease across the shock front.

Finally, a way of including the Landau damping mechanism in the plasma equations is given. It involves putting in a dissipation term of convolution integral form, and parallels a similar approach of Whitham in water wave theory. An important application of this would be towards resolving long-standing difficulties about the "collisionless" shock.

Relevance: 20.00%

Abstract:

Part I.

We have developed a technique for measuring the depth-time history of rigid-body penetration into brittle materials (hard rocks and concretes) under a deceleration of ~10^5 g. The technique includes bar-coded projectile, sabot-projectile separation, and detection and recording systems. Because the technique can give very dense data on the penetration depth-time history, penetration velocity can be deduced. Error analysis shows that the technique has a small intrinsic error of ~3-4% in time during penetration, and 0.3 to 0.7 mm in penetration depth. A series of penetration experiments, firing 4140 steel projectiles into G-mixture mortar targets, was conducted using the Caltech 40 mm gas/powder gun in the velocity range of 100 to 500 m/s.

We report, for the first time, the whole depth-time history of rigid-body penetration into brittle materials (the G-mixture mortar) under 10^5 g deceleration. Based on the experimental results, including the penetration depth-time history, damage of recovered target and projectile materials, and theoretical analysis, we find:

1. Target materials are damaged via compacting in the region in front of a projectile and via brittle radial and lateral crack propagation in the region surrounding the penetration path. The results suggest that expected cracks in front of penetrators may be stopped by a comminuted region that is induced by wave propagation. Aggregate erosion on the projectile lateral surface is < 20% of the final penetration depth. This result suggests that the effect of lateral friction on the penetration process can be ignored.

2. Final penetration depth, Pmax, scales linearly with initial projectile energy per unit cross-section area, es, when targets are intact after impact. Based on the experimental data on the mortar targets, the relation is Pmax (mm) = 1.15 es (J/mm^2) + 16.39.

3. Estimation of the energy needed to create a unit penetration volume suggests that the average pressure acting on the target material during penetration is ~10 to 20 times higher than the unconfined strength of target materials under quasi-static loading, and 3 to 4 times higher than the highest possible pressure due to friction and material strength and its rate dependence. In addition, the experimental data show that the interaction between cracks and the target free surface significantly affects the penetration process.

4. Based on the fact that the penetration duration, tmax, increases slowly with es and is approximately independent of projectile radius, the dependence of tmax on projectile length is suggested to be described by tmax (μs) = 2.08 es (J/mm^2) + 349.0 × m/(πR^2), in which m is the projectile mass in grams and R is the projectile radius in mm. The prediction from this relation is in reasonable agreement with the experimental data for different projectile lengths.

5. Deduced penetration velocity time histories suggest that whole penetration history is divided into three stages: (1) An initial stage in which the projectile velocity change is small due to very small contact area between the projectile and target materials; (2) A steady penetration stage in which projectile velocity continues to decrease smoothly; (3) A penetration stop stage in which projectile deceleration jumps up when velocities are close to a critical value of ~ 35 m/s.

6. The deduced average deceleration, a, in the steady penetration stage for projectiles with the same dimensions is found to be a(g) = 192.4v + 1.89 × 10^4, where v is the initial projectile velocity in m/s. The average pressure acting on target materials during penetration is estimated to be very comparable to the shock wave pressure.

7. A similarity of the penetration process is found, described by a relation between normalized penetration depth, P/Pmax, and normalized penetration time, t/tmax, as P/Pmax = f(t/tmax), where f is a function of t/tmax. After f(t/tmax) is determined using experimental data for projectiles with 150 mm length, the penetration depth-time history for projectiles with 100 mm length predicted by this relation is in good agreement with experimental data. This similarity also predicts that average deceleration increases with decreasing projectile length, which is verified by the experimental data.

8. Based on the penetration process analysis and the present data, a first-principles model for rigid body penetration is suggested. The model incorporates the models for contact area between projectile and target materials, friction coefficient, penetration stop criterion, and normal stress on the projectile surface. The most important assumptions used in the model are: (1) The penetration process can be treated as a series of impact events; therefore, the pressure normal to the projectile surface is estimated using the Hugoniot relation of the target material; (2) The necessary condition for penetration is that the pressure acting on target materials is not lower than the Hugoniot elastic limit; (3) The friction force on the projectile lateral surface can be ignored due to cavitation during penetration. All the parameters involved in the model are determined based on independent experimental data. The penetration depth-time histories predicted from the model are in good agreement with the experimental data.

9. Based on planar impact and previous quasi-static experimental data, the strain-rate dependence of the mortar compressive strength is described by σf/σ0f = exp(0.0905(log(ε̇/ε̇0))^1.14) in the strain-rate range of 10^-7/s to 10^3/s (σ0f and ε̇0 are the reference compressive strength and strain rate, respectively). The non-dispersive Hugoniot elastic wave in the G-mixture has an amplitude of ~0.14 GPa and a velocity of ~4.3 km/s.
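For quick reuse, the empirical fits reported in items 2, 4, and 9 can be collected into a small helper module. This is a sketch, not code from the thesis: the function names are mine, the strength-ratio expression is a reconstruction of a garbled formula, and the base-10 logarithm in it is an assumption.

```python
import math

def penetration_depth_mm(e_s):
    """Item 2 fit: Pmax (mm) = 1.15 * e_s + 16.39, with e_s the initial
    projectile energy per unit cross-section area in J/mm^2."""
    return 1.15 * e_s + 16.39

def penetration_duration_us(e_s, mass_g, radius_mm):
    """Item 4 fit: tmax (us) = 2.08 * e_s + 349.0 * m / (pi * R^2),
    with m the projectile mass in grams and R its radius in mm."""
    return 2.08 * e_s + 349.0 * mass_g / (math.pi * radius_mm ** 2)

def strength_ratio(strain_rate, ref_rate):
    """Item 9 fit as reconstructed: sigma_f / sigma_0f =
    exp(0.0905 * log10(rate / ref_rate) ** 1.14). Base-10 log is an
    assumption; use only for rate >= ref_rate, within the reported
    1e-7/s to 1e3/s range."""
    return math.exp(0.0905 * math.log10(strain_rate / ref_rate) ** 1.14)
```

A projectile delivering 100 J/mm^2, for instance, would be predicted to stop at about 131 mm in an intact mortar target.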

Part II.

Stress wave profiles in vitreous GeO2 were measured using piezoresistance gauges in the pressure range of 5 to 18 GPa under planar plate and spherical projectile impact. Experimental data show that the response of vitreous GeO2 to planar shock loading can be divided into three stages: (1) A ramp elastic precursor has a peak amplitude of 4 GPa and a peak particle velocity of 333 m/s. Wave velocity decreases from the initial longitudinal elastic wave velocity of 3.5 km/s to 2.9 km/s at 4 GPa; (2) A ramp wave with amplitude of 2.11 GPa follows the precursor when the peak loading pressure is 8.4 GPa. Wave velocity drops below the bulk wave velocity in this stage; (3) A shock wave achieving the final shock state forms when the peak pressure is > 6 GPa. The Hugoniot relation is D = 0.917 + 1.711u (km/s), using the present data and the data of Jackson and Ahrens [1979], when the shock wave pressure is between 6 and 40 GPa for ρ0 = 3.655 g/cm^3. Based on the present data, the phase change from 4-fold to 6-fold coordination of Ge^4+ with O^2- in vitreous GeO2 occurs in the pressure range of 4 to 15 ± 1 GPa under planar shock loading. Comparison of the shock loading data for fused SiO2 with those for vitreous GeO2 demonstrates that the transformation to the rutile structure in both media is similar. The Hugoniots of vitreous GeO2 and fused SiO2 are found to coincide approximately if the pressure in fused SiO2 is scaled by the ratio of fused SiO2 to vitreous GeO2 density. This result, as well as the same structure, provides the basis for considering vitreous GeO2 as an analogous material to fused SiO2 under shock loading. Experimental results from the spherical projectile impact demonstrate: (1) The supported elastic shock in fused SiO2 decays less rapidly than a linear elastic wave when the elastic wave stress amplitude is higher than 4 GPa.
The supported elastic shock in vitreous GeO2 decays faster than a linear elastic wave; (2) In vitreous GeO2, unsupported shock waves with peak pressure in the phase transition range (4-15 GPa) decay with propagation distance, x, as ∝ x^-3.35, close to the prediction of Chen et al. [1998]. Based on a simple analysis of spherical wave propagation, we find that the different decay rates of a spherical elastic wave in fused SiO2 and vitreous GeO2 are predictable on the basis of the compressibility variation with stress under one-dimensional strain conditions in the two materials.
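The linear Hugoniot fit D = 0.917 + 1.711u with ρ0 = 3.655 g/cm^3 determines the shock pressure through the Rankine-Hugoniot momentum relation P = ρ0·D·u_p. A minimal sketch (the function name and unit bookkeeping are mine; conveniently, 1 g/cm^3 × (km/s)^2 = 1 GPa):

```python
def shock_pressure_gpa(u_p_km_s):
    """Rankine-Hugoniot momentum jump P = rho0 * D * u_p for vitreous
    GeO2, using the fitted D = 0.917 + 1.711 * u_p (km/s) and
    rho0 = 3.655 g/cm^3. With these units the result is in GPa."""
    rho0 = 3.655
    d_km_s = 0.917 + 1.711 * u_p_km_s
    return rho0 * d_km_s * u_p_km_s
```

At a particle velocity of 1 km/s this gives roughly 9.6 GPa, inside the 6-40 GPa range over which the fit is reported to hold.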

Relevance: 20.00%

Abstract:

The scalability of CMOS technology has driven computation into a diverse range of applications across the power consumption, performance, and size spectra. Communication is a necessary adjunct to computation, and whether this is to push data from node-to-node in a high-performance computing cluster or from the receiver of a wireless link to a neural stimulator in a biomedical implant, interconnect can take up a significant portion of the overall system power budget. Although a single interconnect methodology cannot address such a broad range of systems efficiently, there are a number of key design concepts that enable good interconnect design in the age of highly-scaled CMOS: an emphasis on highly-digital approaches to solving ‘analog’ problems, hardware sharing between links as well as between different functions (such as equalization and synchronization) in the same link, and adaptive hardware that changes its operating parameters to mitigate not only variation in the fabrication of the link, but also link conditions that change over time. These concepts are demonstrated through the use of two design examples, at the extremes of the power and performance spectra.

A novel all-digital clock and data recovery technique for high-performance, high-density interconnect has been developed. Two independently adjustable clock phases are generated from a delay line calibrated to 2 UI. One clock phase is placed in the middle of the eye to recover the data, while the other is swept across the delay line. The samples produced by the two clocks are compared to generate eye information, which is used to determine the best phase for data recovery. The functions of the two clocks are swapped after the data phase is updated; this ping-pong action allows an infinite delay range without the use of a PLL or DLL. The scheme's generalized sampling and retiming architecture is used in a sharing technique that saves power and area in high-density interconnect. The eye information generated is also useful for tuning an adaptive equalizer, circumventing the need for dedicated adaptation hardware.
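The eye-scan step of the scheme above (sweep the search clock across the delay line, mark the phases whose samples agree with the data clock, recenter on the widest agreeing run) can be illustrated with a toy model; the sampling function, phase count, and eye boundaries below are hypothetical, not the fabricated circuit:

```python
def eye_scan(sample, n_phases, data_phase, n_bits=200):
    """Sweep the search clock across every delay-line phase and mark the
    phases whose samples agree with the data clock on every bit."""
    return [all(sample(p, b) == sample(data_phase, b) for b in range(n_bits))
            for p in range(n_phases)]

def recenter(agree):
    """Pick the middle of the widest contiguous run of agreeing phases.
    In the ping-pong scheme this phase is handed to the other clock,
    and the roles of the two clocks are then swapped."""
    best_len = best_start = run = start = 0
    for i, ok in enumerate(agree + [False]):  # sentinel flushes the last run
        if ok:
            if run == 0:
                start = i
            run += 1
        else:
            if run > best_len:
                best_len, best_start = run, start
            run = 0
    return best_start + best_len // 2
```

With a hypothetical 16-phase delay line whose eye spans phases 4 through 11, the scan recenters the data clock at phase 8, the middle of the eye.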

On the other side of the performance/power spectra, a capacitive proximity interconnect has been developed to support 3D integration of biomedical implants. In order to integrate more functionality while staying within size limits, implant electronics can be embedded onto a foldable parylene (‘origami’) substrate. Many of the ICs in an origami implant will be placed face-to-face with each other, so wireless proximity interconnect can be used to increase communication density while decreasing implant size, as well as facilitate a modular approach to implant design, where pre-fabricated parylene-and-IC modules are assembled together on-demand to make custom implants. Such an interconnect needs to be able to sense and adapt to changes in alignment. The proposed array uses a TDC-like structure to realize both communication and alignment sensing within the same set of plates, increasing communication density and eliminating the need to infer link quality from a separate alignment block. In order to distinguish the communication plates from the nearby ground plane, a stimulus is applied to the transmitter plate, which is rectified at the receiver to bias a delay generation block. This delay is in turn converted into a digital word using a TDC, providing alignment information.
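The alignment readout described above (a rectified stimulus biases a delay generator, and a TDC digitizes the resulting delay) can be caricatured in a few lines; every constant and name below is hypothetical, intended only to show the coupling-to-code signal chain:

```python
def tdc_alignment_code(coupling, delay_per_unit_ns=0.5,
                       tdc_lsb_ns=0.25, n_bits=4):
    """Toy model: stronger plate coupling -> more rectified bias ->
    shorter generated delay; the TDC quantizes the delay into a digital
    word, which therefore grows as the plates fall out of alignment.
    All constants are made up for illustration."""
    delay_ns = delay_per_unit_ns / max(coupling, 1e-6)
    code = int(delay_ns / tdc_lsb_ns)
    return min(code, 2 ** n_bits - 1)  # TDC saturates at full scale
```

A well-aligned pair of plates would give a small code, while a badly misaligned pair saturates the TDC, flagging the link for reassembly or retuning.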

Relevance: 20.00%

Abstract:

Part I

Particles are a key feature of planetary atmospheres. On Earth they represent the greatest source of uncertainty in the global energy budget. This uncertainty can be addressed by making more measurements, by improving the theoretical analysis of measurements, and by better modeling basic particle nucleation and initial particle growth within an atmosphere. This work will focus on the latter two methods of improvement.

Uncertainty in measurements is largely due to particle charging. Accurate descriptions of particle charging are challenging because one deals with particles in a gas as opposed to a vacuum, so different length scales come into play. Previous studies have considered the effects of transition between the continuum and kinetic regime and the effects of two and three body interactions within the kinetic regime. These studies, however, use questionable assumptions about the charging process, which result in skewed observations and bias in the proposed dynamics of aerosol particles. These assumptions affect both the ions and particles in the system. Ions are assumed to be point monopoles that have a single characteristic speed rather than follow a distribution. Particles are assumed to be perfect conductors that have up to five elementary charges on them. The effects of three body interaction, ion-molecule-particle, are also overestimated. By revising this theory so that the basic physical attributes of both ions and particles and their interactions are better represented, we are able to make more accurate predictions of particle charging in both the kinetic and continuum regimes.

The same revised theory that was used above to model ion charging can also be applied to the flux of neutral vapor phase molecules to a particle or initial cluster. Using these results we can model the vapor flux to a neutral or charged particle due to diffusion and electromagnetic interactions. In many classical theories currently applied to these models, the finite size of the molecule and the electromagnetic interaction between the molecule and particle, especially for the neutral particle case, are completely ignored, or, as is often the case for a permanent dipole vapor species, strongly underestimated. Comparing our model to these classical models we determine an “enhancement factor” to characterize how important the addition of these physical parameters and processes is to the understanding of particle nucleation and growth.

Part II

Whispering gallery mode (WGM) optical biosensors are capable of extraordinarily sensitive specific and non-specific detection of species suspended in a gas or fluid. Recent experimental results suggest that these devices may attain single-molecule sensitivity to protein solutions in the form of stepwise shifts in their resonance wavelength, λ_R, but present sensor models predict much smaller steps than were reported. This study examines the physical interaction between a WGM sensor and a molecule adsorbed to its surface, exploring assumptions made in previous efforts to model WGM sensor behavior, and describing computational schemes that model the experiments for which single protein sensitivity was reported. The resulting model is used to simulate sensor performance, within constraints imposed by the limited material property data. On this basis, we conclude that nonlinear optical effects would be needed to attain the reported sensitivity, and that, in the experiments for which extreme sensitivity was reported, a bound protein experiences optical energy fluxes too high for such effects to be ignored.

Relevance: 20.00%

Abstract:

This thesis addresses whether it is possible to build a robust memory device for quantum information. Many schemes for fault-tolerant quantum information processing have been developed so far, one of which, called topological quantum computation, makes use of degrees of freedom that are inherently insensitive to local errors. However, this scheme is not so reliable against thermal errors. Other fault-tolerant schemes achieve better reliability through active error correction, but incur a substantial overhead cost. Thus, it is of practical importance and theoretical interest to design and assess fault-tolerant schemes that work well at finite temperature without active error correction.

In this thesis, a three-dimensional gapped lattice spin model is found which demonstrates for the first time that a reliable quantum memory at finite temperature is possible, at least to some extent. When quantum information is encoded into a highly entangled ground state of this model and subjected to thermal errors, the errors remain easily correctable for a long time without any active intervention, because a macroscopic energy barrier keeps the errors well localized. As a result, stored quantum information can be retrieved faithfully for a memory time which grows exponentially with the square of the inverse temperature. In contrast, for previously known types of topological quantum storage in three or fewer spatial dimensions the memory time scales exponentially with the inverse temperature, rather than its square.
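The quantitative contrast drawn above, a memory time growing as exp(c·β²) for this model versus exp(c·β) for previously known 3D codes (β the inverse temperature), is easy to see in a toy comparison; the constant c is illustrative, not a value from the thesis:

```python
import math

def memory_time_square_scaling(beta, c=1.0):
    """Memory time growing as exp(c * beta^2), the scaling reported
    for the new 3D spin model (c is an illustrative constant)."""
    return math.exp(c * beta ** 2)

def memory_time_linear_scaling(beta, c=1.0):
    """Memory time growing as exp(c * beta), the scaling of previously
    known topological storage in three or fewer dimensions."""
    return math.exp(c * beta)
```

The ratio of the two grows as exp(c·β(β - 1)), so the advantage of the new model widens without bound as the temperature is lowered.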

This spin model exhibits a previously unexpected topological quantum order, in which ground states are locally indistinguishable, pointlike excitations are immobile, and the immobility is not affected by small perturbations of the Hamiltonian. The degeneracy of the ground state, though also insensitive to perturbations, is a complicated number-theoretic function of the system size, and the system bifurcates into multiple noninteracting copies of itself under real-space renormalization group transformations. The degeneracy, the excitations, and the renormalization group flow can be analyzed using a framework that exploits the spin model's symmetry and some associated free resolutions of modules over polynomial algebras.

Relevance: 20.00%

Abstract:

The role of metal-acceptor interactions arising from M–BR3 and M–PR3 bonding is discussed with respect to reactions between first-row transition metals and N2, H2, and CO. Thermally robust, S = 1/2 (TPB)Co(H2) and (TPB)Co(N2) complexes (TPB = tris(2-(diisopropylphosphino)phenyl)borane) are described and the energetics of N2 and H2 binding are measured. The H2 and N2 ligands are bound more weakly in the (TPB)Co complexes than in related (SiP3)M(L) complexes (SiP3 = tris(2-(diisopropylphosphino)phenyl)silyl). Comparisons within and between these two ligand platforms allow for the factors that affect N2 (and H2) binding and activation to be delineated. The characterization and reactivity of (DPB)Fe complexes (DPB = bis(2-(diisopropylphosphino)phenyl)phenylborane) in the context of N2 functionalization and E–H bond addition (E = H, C, N, Si) are described. This platform allows for the one-pot transformation of free N2 to an Fe hydrazido(-) complex via an Fe aminoimide intermediate. The principles learned from the N2 chemistry using (DPB)Fe are applied to CO reduction on the same system. The preparation of (DPB)Fe(CO)2 is described as well as its reductive functionalization to generate an unprecedented Fe dicarbyne. The bonding in this highly covalent complex is discussed in detail. Initial studies of the reactivity of the Fe dicarbyne reveal that a CO-derived olefin is released upon hydrogenation. Alternative approaches to uncovering unusual reactivity using metal-acceptor interactions are described in Chapters 5 and 6, including initial studies on a new π-accepting tridentate diphosphinosulfinyl ligand and strategies for designing ligands that undergo site-selective metallation to generate heterobimetallic complexes.

Relevance: 20.00%

Abstract:

Galaxy clusters are the largest gravitationally bound objects in the observable universe, and they are formed from the largest perturbations of the primordial matter power spectrum. During initial cluster collapse, matter is accelerated to supersonic velocities, and the baryonic component is heated as it passes through accretion shocks. This process stabilizes when the pressure of the bound matter prevents further gravitational collapse. Galaxy clusters are useful cosmological probes, because their formation progressively freezes out at the epoch when dark energy begins to dominate the expansion and energy density of the universe. A diverse set of observables, from radio through X-ray wavelengths, are sourced from galaxy clusters, and this is useful for self-calibration. The distributions of these observables trace a cluster's dark matter halo, which represents more than 80% of the cluster's gravitational potential. One such observable is the Sunyaev-Zel'dovich effect (SZE), which results when the ionized intercluster medium blueshifts the cosmic microwave background via Compton scattering. Great technical advances in the last several decades have made regular observation of the SZE possible. Resolved SZE science, such as is explored in this analysis, has benefitted from the construction of large-format camera arrays consisting of highly sensitive millimeter-wave detectors, such as Bolocam. Bolocam is a submillimeter camera, sensitive to 140 GHz and 268 GHz radiation, located at one of the best observing sites in the world: the Caltech Submillimeter Observatory on Mauna Kea in Hawaii. Bolocam fielded 144 of the original spider-web NTD bolometers used in an entire generation of ground-based, balloon-borne, and satellite-borne millimeter-wave instrumentation. Over approximately six years, our group at Caltech has developed a mature galaxy cluster observational program with Bolocam. This thesis describes the construction of the instrument's full cluster catalog: BOXSZ.
Using this catalog, I have scaled the Bolocam SZE measurements with X-ray mass approximations in an effort to characterize the SZE signal as a viable mass probe for cosmology. This work has confirmed the SZE to be a low-scatter tracer of cluster mass. The analysis has also revealed how sensitive the SZE-mass scaling is to small biases in the adopted mass approximation. Future Bolocam analysis efforts are set on resolving these discrepancies by approximating cluster mass jointly with different observational probes.

Relevance: 20.00%

Abstract:

Flies are particularly adept at balancing the competing demands of delay tolerance, performance, and robustness during flight, which invites thoughtful examination of their multimodal feedback architecture. This dissertation examines stabilization requirements for inner-loop feedback strategies in the flapping flight of Drosophila, the fruit fly, against the backdrop of sensorimotor transformations present in the animal. Flies have evolved multiple specializations to reduce sensorimotor latency, but sensory delay during flight is still significant on the timescale of body dynamics. I explored the effect of sensor delay on flight stability and performance for yaw turns using a dynamically-scaled robot equipped with a real-time feedback system that performed active turns in response to measured yaw torque. The results show a fundamental tradeoff between sensor delay and permissible feedback gain, and suggest that fast mechanosensory feedback provides a source of active damping that complements that contributed by passive effects. Presented in the context of these findings, a control architecture whereby a haltere-mediated inner-loop proportional controller provides damping for slower visually-mediated feedback is consistent with tethered-flight measurements, free-flight observations, and engineering design principles. Additionally, I investigated how flies adjust stroke features to regulate and stabilize level forward flight. The results suggest that few changes to hovering kinematics are actually required to meet steady-state lift and thrust requirements at different flight speeds, and the primary driver of equilibrium velocity is the aerodynamic pitch moment. This finding is consistent with prior hypotheses and observations regarding the relationship between body pitch and flight speed in fruit flies. The results also show that the dynamics may be stabilized with additional pitch damping, but the magnitude of required damping increases with flight speed.
I posit that differences in stroke deviation between the upstroke and downstroke might play a critical role in this stabilization. Fast mechanosensory feedback of the pitch rate could enable active damping, which would inherently exhibit gain scheduling with flight speed if pitch torque is regulated by adjusting stroke deviation. Such a control scheme would provide an elegant solution for flight stabilization across a wide range of flight speeds.
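The delay-gain tradeoff described above can be reproduced with a toy delayed-feedback simulation; the first-order yaw dynamics, constants, and gains below are illustrative stand-ins, not the dissertation's robot model:

```python
def peak_yaw_rate(gain, delay_steps, n_steps=4000, dt=0.001):
    """Yaw-rate dynamics I*w' = -b*w - gain*w(t - delay), i.e. purely
    proportional feedback acting on a delayed measurement. Returns the
    peak |w| after a unit disturbance; a large peak signals that the
    delay has destabilized the loop at this gain."""
    inertia, b = 1.0, 0.1
    w = 1.0                        # unit yaw-rate kick
    history = [0.0] * delay_steps  # sensor pipeline holding delayed samples
    peak = abs(w)
    for _ in range(n_steps):
        history.append(w)
        delayed_w = history.pop(0)
        w += dt * (-b * w - gain * delayed_w) / inertia
        peak = max(peak, abs(w))
    return peak
```

With a 50 ms sensing delay, a modest gain damps the disturbance smoothly, while a gain well beyond the delay margin drives growing oscillations, the tradeoff the robot experiments quantify.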

Relevance: 20.00%

Abstract:

RNA interference (RNAi) is a powerful biological pathway allowing for sequence-specific knockdown of any gene of interest. While RNAi is a proven tool for probing gene function in biological circuits, it is limited in that it is constitutively ON, executing a single logical operation: silence gene Y. To provide greater control over post-transcriptional gene silencing, we propose engineering a biological logic gate to implement “conditional RNAi.” Such a logic gate would silence gene Y only upon the expression of gene X, a completely unrelated gene, executing the logic: if gene X is transcribed, silence independent gene Y. Silencing of gene Y could be confined to a specific time and/or tissue by appropriately selecting gene X.

To implement the logic of conditional RNAi, we present the design and experimental validation of three nucleic acid self-assembly mechanisms which detect a sub-sequence of mRNA X and produce a Dicer substrate specific to gene Y. We introduce small conditional RNAs (scRNAs) to execute the signal transduction under isothermal conditions. scRNAs are small RNAs which change conformation, leading to both shape and sequence signal transduction, in response to hybridization to an input nucleic acid target. While all three conditional RNAi mechanisms execute the same logical operation, they explore various design alternatives for nucleic acid self-assembly pathways, including the use of duplex and monomer scRNAs, stable versus metastable reactants, multiple methods of nucleation, and 3-way and 4-way branch migration.

We demonstrate the isothermal execution of the conditional RNAi mechanisms in a test tube with recombinant Dicer. These mechanisms execute the logic: if mRNA X is detected, produce a Dicer substrate targeting independent mRNA Y. Only the final Dicer substrate, not the scRNA reactants or intermediates, is efficiently processed by Dicer. Additional work in human whole-cell extracts and a model tissue-culture system delves into both the promise and challenge of implementing conditional RNAi in vivo.

Relevance: 20.00%

Abstract:

The particulate methane monooxygenase (pMMO) catalyzes the oxidation of methane to methanol under ambient temperatures and pressures. Other small alkanes and alkenes are also substrates of this enzyme. We measured and compared the initial rate constants of oxidation of small alkanes (C1 to C5) catalyzed by pMMO. Both primary and secondary alcohols were formed from oxidation of n-butane and n-pentane. The alcohols produced from alkane oxidation can be further oxidized, probably by pMMO, to aldehydes and ketones. The apparent regioselectivity for n-butane and n-pentane is 100% 2-alcohols because the formation of primary alcohols is slower than further oxidation of these alcohols. The hydroxylation at the secondary carbons is highly stereoselective: (R)-alcohols are preferentially formed. The enantiomeric excess increases slightly with decreasing reaction temperature. The steric course of hydroxylation on primary carbons was also studied by using isotopically substituted ethane: (S)- or (R)-CH_3-CHDT, and (S)- or (R)-CD_3-CHDT, and the reactions were found to proceed with 100% retention of configuration. A primary kinetic isotope effect of k_H/k_D = 5.0 was observed in these experiments.
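The (R)-preference reported above is quantified by enantiomeric excess; as a reminder of the standard definition (a generic helper, not code or data from the thesis):

```python
def enantiomeric_excess(r_amount, s_amount):
    """ee = (R - S) / (R + S): +1 is enantiopure (R), 0 is racemic,
    and negative values indicate an (S) excess."""
    return (r_amount - s_amount) / (r_amount + s_amount)
```

For example, a product mixture of 75% (R)- and 25% (S)-alcohol corresponds to an ee of 0.5, i.e. 50%.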

Relevance: 20.00%

Abstract:

For a hungry fruit fly, locating and landing on a fermenting fruit where it can feed, find mates, and lay eggs is an essential and difficult task requiring the integration of both olfactory and visual cues. Understanding how flies accomplish this will help provide a comprehensive ethological context for the expanding knowledge of their neural circuits involved in processing olfaction and vision, as well as inspire novel engineering solutions for control and estimation in computationally limited robotic applications. In this thesis, I use novel high-throughput methods to develop a detailed overview of how flies track odor plumes, land, and regulate flight speed. Finally, I provide an example of how these insights can be applied to robotics to simplify complicated estimation problems. To localize an odor source, flies exhibit three iterative, reflex-driven behaviors. Upon encountering an attractive plume, flies increase their flight speed and turn upwind using visual cues. After losing the plume, flies begin zigzagging crosswind, again using visual cues to control their heading. After sensing an attractive odor, flies become more attracted to small visual features, which increases their chances of finding the plume source. Their changes in heading are largely controlled by open-loop maneuvers called saccades, which they direct towards and away from visual features. If a fly decides to land on an object, it begins to decelerate so as to maintain a stereotypical ratio of expansion to retinal size. Once it reaches a stereotypical distance from the target, the fly extends its legs in preparation for touchdown. Although it is unclear what cues trigger this behavior, previous studies have indicated that it is likely under visual control.
In Chapter 3, I use a nonlinear control theoretic analysis and a robotic testbed to propose a novel, putative mechanism by which a fly might visually estimate distance by actively decelerating according to a visual control law. Throughout these behaviors, a common theme is the visual control of flight speed. Using genetic tools, I show that the neuromodulator octopamine plays an important role in regulating flight speed, and I propose a neural circuit for how this controller might be implemented in the fly's brain. Two general biological and engineering principles are evident across my experiments: (1) complex behaviors, such as foraging, can emerge from the interactions of simple, independent sensory-motor modules; (2) flies control their behavior in ways that simplify complex estimation problems.
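A deceleration law of this general flavor can be sketched in a few lines. The simulation below holds the ratio of approach speed to distance (a crude stand-in for retinal expansion rate over retinal size) near a setpoint by braking in proportion to the error; the gain, setpoint, and leg-extension distance are invented illustrative values, not the thesis's fitted control law or fly data:

```python
# Discrete-time sketch: regulate the expansion proxy r = v/d toward a
# setpoint by commanding deceleration proportional to the error.

def simulate_approach(d0, v0, r_set, gain=10.0, dt=0.01, max_steps=500):
    """Integrate an approach with proportional control on r = v/d.

    Returns (distance, speed) when the hypothetical leg-extension
    distance is reached, or after max_steps."""
    d, v = d0, v0
    for _ in range(max_steps):
        r = v / d                       # expansion-to-size proxy
        a = -gain * (r - r_set) * d     # brake when expanding too fast
        v += a * dt
        d -= v * dt
        if d <= 0.05:                   # hypothetical leg-extension trigger
            break
    return d, v
```

Starting fast and far away, the controller bleeds off speed so that the fly-like agent arrives at the trigger distance still moving forward but slowly, without ever needing an explicit distance estimate along the way.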

Abstract:

The problem is to calculate the attenuation of plane sound waves passing through a viscous, heat-conducting fluid containing small spherical inhomogeneities. The attenuation is calculated by evaluating the rate of increase of entropy caused by two irreversible processes: (1) the mechanical work done by the viscous stresses in the presence of velocity gradients, and (2) the flow of heat down the thermal gradients. The method is first applied to a homogeneous fluid with no spheres and shown to give the classical Stokes-Kirchhoff expressions. The method is then used to calculate the additional viscous and thermal attenuation when small spheres are present. The viscous attenuation agrees with Epstein's result obtained in 1941 for a non-heat-conducting fluid. The thermal attenuation is found to be similar in form to the viscous attenuation and, for gases, of comparable magnitude. The general results are applied to the case of water drops in air and air bubbles in water.

For water drops in air the viscous and thermal attenuations are comparable; the thermal losses occur almost entirely in the air, the thermal dissipation in the water being negligible. The theoretical values are compared with Knudsen's experimental data for fogs and found to agree in order of magnitude and dependence on frequency. For air bubbles in water the viscous losses are negligible and the calculated attenuation is almost completely due to thermal losses occurring in the air inside the bubbles, the thermal dissipation in the water being relatively small. (These results apply only to non-resonant bubbles whose radius changes but slightly during the acoustic cycle.)
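The homogeneous-fluid limit recovered in the first part is the classical Stokes-Kirchhoff result, which has the standard closed form alpha = (omega^2 / 2 rho c^3) [ (4/3) mu + (gamma - 1) kappa / c_p ]. A direct transcription in Python, with rough room-temperature air properties as illustrative inputs (not values taken from this work):

```python
import math

def stokes_kirchhoff_alpha(f, rho, c, mu, kappa, gamma, c_p):
    """Classical Stokes-Kirchhoff attenuation coefficient (Np/m) for a
    homogeneous viscous, heat-conducting fluid.

    f: frequency (Hz), rho: density, c: sound speed, mu: shear viscosity,
    kappa: thermal conductivity, gamma: specific-heat ratio,
    c_p: specific heat at constant pressure (SI units throughout)."""
    omega = 2.0 * math.pi * f
    return (omega**2 / (2.0 * rho * c**3)) * (
        (4.0 / 3.0) * mu + (gamma - 1.0) * kappa / c_p
    )
```

The two bracketed terms are exactly the two entropy-producing processes named above, viscous work and heat conduction, and the omega-squared prefactor gives the quadratic frequency dependence against which the droplet and bubble corrections are compared.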

Abstract:

Complexity in the earthquake rupture process can result from many factors. This study investigates the origin of such complexity by examining several recent, large earthquakes in detail. In each case the local tectonic environment plays an important role in understanding the source of the complexity.

Several large shallow earthquakes (Ms > 7.0) along the Middle American Trench have similarities and differences between them that may lead to a better understanding of fracture and subduction processes. They are predominantly thrust events consistent with the known subduction of the Cocos plate beneath N. America. Two events occurring along this subduction zone close to triple junctions show considerable complexity. This may be attributable to a more heterogeneous stress environment in these regions and as such has implications for other subduction zone boundaries.

An event which looks complex but is actually rather simple is the 1978 Bermuda earthquake (Ms ~ 6). It is located predominantly in the mantle. Its mechanism is one of pure thrust faulting with a strike N 20°W and dip 42°NE. Its apparent complexity is caused by local crustal structure. This is an important event in terms of understanding and estimating seismic hazard on the eastern seaboard of N. America.

A study of several large strike-slip continental earthquakes identifies characteristics which are common to them and may be useful in determining what to expect from the next great earthquake on the San Andreas fault. The events are the 1976 Guatemala earthquake on the Motagua fault and two events on the Anatolian fault in Turkey (the 1967 Mudurnu Valley and 1976 E. Turkey events). An attempt to model the complex P-waveforms of these events results in good synthetic fits for the Guatemala and Mudurnu Valley events. However, the E. Turkey event proves to be too complex, as it may have associated thrust or normal faulting. Several individual sources occurring at intervals of between 5 and 20 seconds characterize the Guatemala and Mudurnu Valley events. The maximum size of an individual source appears to be bounded at about 5 x 10^26 dyne-cm. A detailed source study including directivity is performed on the Guatemala event. The source time history of the Mudurnu Valley event illustrates its significance in modeling strong ground motion in the near field. The complex source time series of the 1967 event produces amplitudes greater by a factor of 2.5 than a uniform model scaled to the same size for a station 20 km from the fault.
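For scale, a seismic moment bound like the 5 x 10^26 dyne-cm quoted above can be converted to a moment magnitude with the now-standard Hanks-Kanamori relation Mw = (2/3) log10(M0) - 10.7 (M0 in dyne-cm); this relation postdates the surface-wave magnitudes used in the study and is shown only as a modern point of reference:

```python
import math

def moment_magnitude(m0_dyne_cm: float) -> float:
    """Moment magnitude Mw from seismic moment in dyne-cm,
    via the Hanks-Kanamori relation."""
    return (2.0 / 3.0) * math.log10(m0_dyne_cm) - 10.7
```

By this conversion the individual-source bound of about 5 x 10^26 dyne-cm corresponds to roughly Mw 7.1 per sub-event, consistent with great earthquakes being built from sequences of such sources.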

Three large and important earthquakes demonstrate an important type of complexity --- multiple-fault complexity. The first, the 1976 Philippine earthquake, an oblique thrust event, represents the first seismological evidence for a northeast dipping subduction zone beneath the island of Mindanao. A large event, following the mainshock by 12 hours, occurred outside the aftershock area and apparently resulted from motion on a subsidiary fault since the event had a strike-slip mechanism.

An aftershock of the great 1960 Chilean earthquake on June 6, 1960, proved to be an interesting discovery. It appears to be a large strike-slip event at the main rupture's southern boundary. It most likely occurred on the landward extension of the Chile Rise transform fault, in the subducting plate. The results for this event suggest that a small event triggered a series of slow events; the duration of the whole sequence being longer than 1 hour. This is indeed a "slow earthquake".

Perhaps one of the most complex of events is the recent Tangshan, China event. It began as a large strike-slip event. Within several seconds of the mainshock it may have triggered thrust faulting to the south of the epicenter. There is no doubt, however, that it triggered a large oblique normal event to the northeast, 15 hours after the mainshock. This event certainly contributed to the great loss of life sustained as a result of the Tangshan earthquake sequence.

What has been learned from these studies has been applied to predict what one might expect from the next great earthquake on the San Andreas. The expectation from this study is that such an event would be a large complex event, not unlike, but perhaps larger than, the Guatemala or Mudurnu Valley events. That is to say, it will most likely consist of a series of individual events in sequence. It is also quite possible that the event could trigger associated faulting on neighboring fault systems such as those occurring in the Transverse Ranges. This has important bearing on the earthquake hazard estimation for the region.

Abstract:

A hydromechanical theory is developed for cycloidal propellers for two limiting modes of operation wherein U » ΩR and U « ΩR, with U the rectilinear propeller speed (speed of advance) and ΩR the rotational blade speed. A first order theory is developed from the basic principles of the kinematics and dynamics of fluid motion and proceeds from the point of view of unsteady hydrofoil theory.

Explicit expressions for the instantaneous forces and moments produced by blade motions are presented. On the basis of these results an optimization procedure is carried out which minimizes the energy loss under the constraint of specified mean thrust. Under optimal conditions the propeller is found to possess high Froude efficiencies in both the high and low speed modes of propulsion. This efficiency is defined as the ratio of the average useful work obtained during one cycle of propeller operation to the average power input required to sustain the motion of the propeller during the cycle.
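Under the steady conditions of the optimization, the Froude efficiency defined above (average useful work per cycle over average power input per cycle) reduces to mean thrust times speed of advance over mean input power. A one-line helper, with made-up numbers in the test rather than values from the thesis:

```python
def froude_efficiency(mean_thrust: float, speed_of_advance: float,
                      mean_input_power: float) -> float:
    """Froude efficiency: average useful work rate (thrust times speed of
    advance U) divided by average power input over one propeller cycle."""
    return mean_thrust * speed_of_advance / mean_input_power
```

For instance, a propeller producing 100 N of mean thrust at U = 2 m/s while drawing 250 W has a Froude efficiency of 0.8, the kind of high value the optimized cycloidal propeller is found to reach in both the U >> Omega R and U << Omega R modes.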